Turns out that there wasn't so much room on the shoulders of giants after all.
Otherwise, unemployed people would have to depend upon state welfare, and that is unreliable. We'll see lots of innovation in the field of self-sustainable living.
People don't magically turn to farming when they lose their jobs. It requires land, skill, and capital.
When we reach mass unemployment, we'll either have to provide for everyone or have a revolution.
To me, this seems like the default future if we don't actively prevent it.
The homeless, in a sad way, go back to human roots and labor as "gatherers", but primarily gathering refuse.
I see a lot of homelessness now. There is a significant amount of innovation, but it remains a miserable, dehumanizing, and short existence.
And if you have, e.g., some chicken dung, it works as a natural fertilizer.
Genetically modified seeds from Monsanto are only good for one year and only work with fertilizer.
"Homelessness isn't a problem, you just need some seeds. Ergo if the homeless are starving they must be too lazy or maybe they spend all their money on drugs so they can't even afford a few seeds!"
Try being homeless, then go back and tell us how "easy" it is.
(Speaking as someone who has been homeless before).
I tried "draw a 3d cube", apparently it doesn't have any 3d java libraries baked in, but it did give me a bunch of 2d APIs, and then "plot a math function", giving me some trig functions directly and some plotting functions.
That would probably have saved me 80% of my time looking stuff up, especially in such a large search space
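Out of curiosity, here's a rough Java sketch of what stitching those suggested 2D APIs together into the "plot a math function" case might look like; the composition is my own, not the tool's actual output:

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Hypothetical composition of the kind of sequence the tool suggests
// (Math.sin -> Graphics2D.drawLine): plot sin(x) over [0, 4*pi].
public class PlotSine {
    public static void main(String[] args) throws Exception {
        int w = 400, h = 200;
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w, h);
        g.setColor(Color.BLUE);
        g.setStroke(new BasicStroke(2));
        int prevX = 0, prevY = h / 2; // sin(0) = 0, so start on the axis
        for (int x = 1; x < w; x++) {
            double t = x * 4 * Math.PI / w;               // map pixel x to [0, 4*pi]
            int y = h / 2 - (int) (Math.sin(t) * h / 2.5); // scale and flip y
            g.drawLine(prevX, prevY, x, y);                // connect successive samples
            prevX = x;
            prevY = y;
        }
        g.dispose();
        ImageIO.write(img, "png", new File("sine.png"));
    }
}
```

The point being: even when the generated sequence is only a skeleton, it narrows the search space down to a handful of classes worth reading up on.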
OK then, humans still needed. Best case, it seems AI will take up what one would consider interesting work (algorithms, thinking) and humans will end up doing grunt work -- testing for bugs, cleaning up data, formatting data, explaining things to other humans, etc.
Realistically though, this just creates two APIs (instead of one) for humans to master: the original API and the 99%-accurate machine API, plus knowing where the gaps/bugs are.
labeling training examples for deep learning
Not really, it's one mega-mecha-meta-API instead of hundreds of disparate small ones.
So you'd need to be able to find API bugs, I agree with that, but overall you'd probably need less knowledge. And if a large number of humans use this DeepAPI system, the bugs can be found and fixed relatively quickly.
Also of note from section 5.2: SWIM uses Bing clickthrough data to build the model.
Using a better (or simply more heavily used) search engine like Google Search would likely improve the SWIM results.
EDIT: The metric they use to compare the methods is BLEU, which stands for Bilingual Evaluation Understudy and was developed for automated machine-translation evaluation. Apparently CS authors no longer bother with expanding acronyms the first time they are used. Paper is here:
EDIT2: Also, for the BLEU comparison they compare the computer-generated API sequence to a human-written API sequence. However, they give no details on who produces the human-written sequences or how. Are the researchers coming up with their own API sequences? Are they using Mechanical Turk? Interns? There could be significant bias depending on how these human-written sequences are generated.
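For anyone unfamiliar with the metric, here's a toy Java sketch of how BLEU combines clipped n-gram precision with a brevity penalty; the candidate/reference sequences are made up, and the paper's exact BLEU configuration (n-gram order, smoothing) may differ:

```java
import java.util.*;

// Toy BLEU-2 on API-call sequences: modified (clipped) n-gram precision
// for n = 1, 2, times a brevity penalty. No smoothing, so a zero match
// count for any n would send the whole score to zero.
public class BleuSketch {
    static Map<String, Integer> ngrams(List<String> toks, int n) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + n <= toks.size(); i++) {
            counts.merge(String.join(" ", toks.subList(i, i + n)), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // candidate = machine-generated sequence, reference = human-written one
        List<String> cand = Arrays.asList(
            "File.new", "FileReader.new", "BufferedReader.new", "BufferedReader.read");
        List<String> ref = Arrays.asList(
            "FileReader.new", "BufferedReader.new", "BufferedReader.read", "BufferedReader.close");

        double logSum = 0;
        for (int n = 1; n <= 2; n++) {
            Map<String, Integer> c = ngrams(cand, n), r = ngrams(ref, n);
            int match = 0, total = 0;
            for (Map.Entry<String, Integer> e : c.entrySet()) {
                match += Math.min(e.getValue(), r.getOrDefault(e.getKey(), 0)); // clip to reference count
                total += e.getValue();
            }
            logSum += 0.5 * Math.log((double) match / total); // uniform weights w_n = 1/2
        }
        // brevity penalty: penalize candidates shorter than the reference
        double bp = cand.size() >= ref.size()
            ? 1.0 : Math.exp(1.0 - (double) ref.size() / cand.size());
        System.out.printf("BLEU-2 = %.3f%n", bp * Math.exp(logSum));
    }
}
```

Whatever the configuration, the score is only as meaningful as the human references it's computed against, which is exactly why the missing details matter.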
What does CS stand for? :p
But yeah, I might expect BLEU to go unexpanded in an NLP paper, but not here.
Haha. Well played. :) I just think it's good practice in general, no matter how common, to use the full name when the acronym is introduced. Especially if it is a method used in the paper.
Also, BLEU is a fairly widely used metric. I assumed they referenced the paper though?
As a rough analogy, consider how ancient philosophers tried to reason about natural-language explanations (as a vehicle to reason about the world). This led to the development of formal languages, especially in mathematics, but also, e.g., in law (both have lots of clearly defined terms that try to make up for the ambiguities of natural language).
This is exactly what I like about languages like Haskell, where you can reason relatively easily about code (although it's far from perfect). Or OCaml, where in addition you can reason about performance (although not perfectly, due to garbage collection etc.). Or Rust, where in addition the compiler helps you reason clearly about memory usage and aliasing.
This is all far from perfect, but my point is that improving languages (and actually _using_ these good languages!) is as important as writing good code in the first place.