For me, the largest omission is the lack of any reference to the theoretical limits of machine learning. That is, what can't be achieved even if you assume infinite resources and algorithmic sophistication. It's important to me because this paper is otherwise a damn good stab at a comprehensive review of why machine learning projects fail, except for missing this critical point. The idea is best explored in the book What Computers Can't Do (H. Dreyfus, 1972), revisited in What Computers Still Can't Do (H. Dreyfus, 1992), and well summarized in A History of First Step Fallacies (H. Dreyfus, 2012):
Finally, any paper that's freely distributed, can be enjoyed over lunch, and includes the phrase "most of the volume of a high-dimensional orange is in the skin, not the pulp" is fine in my book.
 - http://link.springer.com/article/10.1007%2Fs11023-012-9276-0 [PDF]
Feature engineering is more difficult because it's domain-specific, while learners can be largely general-purpose ... one of the holy grails of machine learning is to automate more and more of the feature engineering process.
This is the goal of deep learning and, more generally, representation learning: the automatic discovery of explanatory features from large amounts of data. I'm surprised it wasn't mentioned.
You still need to transform your context into a vector of boolean or real values, somehow. And that transform is going to encode assumptions about what information is relevant to the problem, and what's not.
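To make that concrete, here's a minimal sketch (all field names and the notion of "pricey zip codes" are made up for illustration) of what that transform looks like for housing data. Every line is an assumption about what's relevant:

```python
# Hypothetical feature extraction for a housing listing. Which fields we
# keep, and how we encode them, bakes our beliefs into the vector.
PRICEY_ZIPS = {"94301", "10013"}  # assumed prior knowledge, not learned

def featurize(listing: dict) -> list[float]:
    """Turn a raw listing into a vector of booleans/reals a learner can use."""
    return [
        float(listing["sqft"]),                          # real value, as-is
        float(listing["bedrooms"]),
        1.0 if listing["has_garage"] else 0.0,           # boolean -> {0, 1}
        1.0 if listing["zip"] in PRICEY_ZIPS else 0.0,   # encodes a prior belief
    ]

x = featurize({"sqft": 1500, "bedrooms": 3, "has_garage": True, "zip": "94301"})
# x == [1500.0, 3.0, 1.0, 1.0]
```

Note that the listing's street name, say, is silently dropped: the transform has already decided it's irrelevant, before any learning happens.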
Let's say you're trying to predict house prices. There's no end of geo-tagged data you might pull in, and if you have a cleverer idea than the next guy, your model will be more accurate. Probably, assuming the next guy is at least competent, it'll be your feature ideas that set you apart.
In a linear model, you need to come up with a clever set of conjunction features, one that balances bias and variance. You don't need to do that for a deep learning model, and that's a big advantage. But that's not the same as saying there's no feature engineering.
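By "conjunction features" I mean crossing existing features so a linear model can capture non-additive effects. A toy sketch (feature names invented for the example):

```python
from itertools import combinations

def add_conjunctions(x: dict) -> dict:
    """Augment a feature dict with all pairwise products ("conjunctions").

    For boolean features the product is a logical AND; for reals it's an
    interaction term. Every cross you add lowers bias but raises variance,
    so the craft is in choosing which crosses to keep.
    """
    crossed = dict(x)
    for a, b in combinations(sorted(x), 2):
        crossed[f"{a}&{b}"] = x[a] * x[b]
    return crossed

features = {"has_garage": 1.0, "pricey_zip": 1.0, "renovated": 0.0}
crossed = add_conjunctions(features)
# crossed["has_garage&pricey_zip"] == 1.0: a garage in a pricey zip can now
# get its own weight, which a purely additive linear model can't express.
```

A deep net can learn such interactions in its hidden layers, which is exactly the advantage I mean, but someone still chose the raw inputs.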
Once we get to that point, the "black box machine learning as a service" that many startups seem to be selling nowadays will actually replace data scientists.