1) It's incredibly difficult to create strategies that work because there are thousands of highly trained, highly paid people working to create strategies.
2) Any strategy that you do create will self-correct over time and become useless: strategies work by finding cases where the market has misjudged prices, and as the strategy exploits those mispricings, the market should, over time, stop misjudging them.
The "EMH" that the parent is referring to is the Efficient Market Hypothesis, which states that the market price is the "correct" price, and that it is impossible to predict a better price, as the current price incorporates all possible information.
I, personally, don't subscribe to that version of the EMH; I believe the one implied by 1), which is that it's difficult to compete with the thousands of math PhDs working on Wall Street.
You'd think so, and yet ML is transforming investment and trading strategies.
From The Economist:
> Castle Ridge Asset Management, a Toronto-based upstart, has achieved annual average returns of 32% since its founding in 2013. It uses a sophisticated machine-learning system, like those used to model evolutionary biology, to make investment decisions. It is so sensitive, claims the firm’s chief executive, Adrian de Valois-Franklin, that it picked up 24 acquisitions before they were even announced.
Something like that can be written, but it won't be very useful. Most of the simple stuff, the non-mathematical parts, would get automated away. You don't want to be in a position where you're competing with someone's commodity script (sometimes just a for loop), unless the situation calls for desperate measures.
A bodybuilder got to lift them weights.
One genuine scenario could be that you personally don't do ML but want to evaluate/understand what your hires are doing. Even then, it's hard to avoid the math if you want to do a semi-decent job.
I think this is the only way to really grok backpropagation. Hours of staring at the update formula until your eyes glaze over the subscripts, superscripts, and summations won't give you as good an understanding as implementing a toy neural net with just a single hidden layer. It's actually a whole lot easier than parsing that low-level notation. It can be done better with high-level notation, but then you'd need familiarity with the relevant mathematical abstractions.
But I guess the approach depends on how you best learn, so there is no wrong answer. Just jump in!
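To make that concrete, here's a minimal sketch of such a toy net in pure Python: one hidden layer, sigmoid activations, and the backprop update written out via the chain rule. The architecture, learning rate, and the choice of learning logical AND (kept linearly separable so it reliably converges) are my own illustrative assumptions, not anyone's reference implementation.

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden layer (2 units), one sigmoid output; weights as plain lists.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def train_step(x, target, lr=0.5):
    global b2
    h, y = forward(x)
    # Gradient of 0.5 * (y - target)^2 through the output sigmoid.
    delta_y = (y - target) * y * (1.0 - y)
    # Backprop: each hidden unit gets its share of the output error,
    # scaled by its own sigmoid derivative (computed with the OLD weights).
    delta_h = [delta_y * w2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
    for j in range(2):
        w2[j] -= lr * delta_y * h[j]
        b1[j] -= lr * delta_h[j]
        for i in range(2):
            w1[j][i] -= lr * delta_h[j] * x[i]
    b2 -= lr * delta_y
    return (y - target) ** 2

# Learn logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(10000):
    loss = sum(train_step(x, t) for x, t in data)
```

The whole thing is maybe thirty lines, and every subscript in the textbook formula becomes a concrete loop index you can print and inspect.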
I learned nothing from implementing my own, other than why all NN libs break when you input values > 1.
Any decent NN lib will be way better than whatever you could write in a week full-time.
Pick your favorite language, find the most-used NN lib, and try a Kaggle competition.
If you can get to about rank 50%, your training is complete.
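On the "values > 1" point: sigmoid-style activations saturate on large inputs and gradients misbehave, which is why most pipelines scale features first. A minimal sketch of column-wise standardization, with made-up price data:

```python
def standardize(column):
    """Shift a feature column to mean 0 and scale it to unit variance."""
    mean = sum(column) / len(column)
    var = sum((v - mean) ** 2 for v in column) / len(column)
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(v - mean) / std for v in column]

prices = [101.0, 103.5, 99.2, 120.0, 118.3]  # made-up closing prices
scaled = standardize(prices)
```

After scaling, the values sit in a small range around zero, which is what most nets (and their default initializations) implicitly expect.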
Not if we understand 'effective' to also mean 'cost & time effective'.
You'd learn a lot by building a nuclear reactor from first principles, but it's not the most effective way to develop an intuition about how one operates.
effective and efficient are synonyms in this context
A problem is that ML is not quite as mature as a car yet, so the driving manuals will be a bit on the thinner / shallower side.
> I understand how neural networks work
Quickly write that down please, that would do the world a favor. Researchers are still grappling with the question 'why the hell does this freaking thing work as well as it does, when it does'.
And of course, it's loads and loads of work.
What makes the difference in success for machine learning projects (this is assuming you have some process to avoid screwups in place):
1) The quality of how the data is presented to the network. You can almost never "usefully" put raw data in front of a neural network. Call it "data representation". For instance, for stock prices, train a DNN predictor on the raw prices, then do the same on the deltas (today - yesterday). The deltas work 1000x better (still not good enough, but a very clear difference).
It may not compare to the feature engineering you'd do for SVMs, but it's very much present (it kind of does compare, imho, but people tend to get very defensive when you suggest that).
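A quick sketch of that delta transformation, with made-up prices:

```python
# Raw daily prices (made-up numbers) versus their day-over-day deltas.
prices = [100.0, 102.0, 101.5, 104.0, 103.0]
deltas = [today - yesterday for yesterday, today in zip(prices, prices[1:])]
print(deltas)  # → [2.0, -0.5, 2.5, -1.0]
```

Same information, but the deltas are small, roughly centered on zero, and stationary-ish, which is a far friendlier representation for a net than a raw price level around 100.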
2) Your cost/loss function. There are tricks like GANs that are simple in concept and avoid some of the issues, but even there you're still going to need your image comparator. You can massively improve image comparisons: e.g. the mean square difference of the images scaled to 4x4, times 1e12, plus the same at 16x16 times 1e6, plus the mean square difference at full resolution beats the crap out of just taking the mean square difference alone. Very good results have also been obtained on MNIST by comparing sorted lists of black pixels instead of the actual images.
Many things depend on your cost/loss function, and you especially have the eternal problem: I want it to improve in 10 different ways; how do I balance those well into a single number? The robot should grab the teddy bear and shouldn't hit anything else. Those are easy to balance, but how would you balance grabbing the teddy bear at all versus not damaging it? It will make a huge difference in whether your neural net converges at all.
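Here's a rough sketch of that multi-scale comparison. The box downscaling, the tiny 8x8 example images, and the downscale factors are my own illustrative assumptions; the 1e12/1e6 weights are taken from the description above purely as illustration.

```python
def downscale(img, factor):
    """Average non-overlapping factor x factor blocks of a square image."""
    n = len(img) // factor
    return [[sum(img[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(n)] for r in range(n)]

def mse(a, b):
    """Mean square difference between two images (lists of rows)."""
    flat = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(flat) / len(flat)

def multiscale_loss(a, b):
    # Coarse scales weighted far more heavily than full resolution, so the
    # net is pushed to get the overall structure right before the details.
    return (1e12 * mse(downscale(a, 4), downscale(b, 4))
            + 1e6 * mse(downscale(a, 2), downscale(b, 2))
            + mse(a, b))

blank = [[0.0] * 8 for _ in range(8)]
dotted = [[0.0] * 8 for _ in range(8)]
dotted[0][0] = 1.0  # a single differing pixel
```

Because the coarse terms dominate, even a one-pixel difference that survives downscaling gets punished heavily, which is exactly the "balance 10 objectives into one number" game.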
The programming assignments also give you that nice practical experience: you get training sets to code and run ML algorithms against, using a language (Octave) that has a nice mix of low/high level-ness.
I strongly recommend it.
Can't recommend this enough.
1) Brandon Rohrer, now Data Scientist at Facebook, has a few great talks, including one on Bayes' Theorem/Bayesian Inference - https://m.youtube.com/playlist?list=PLVZqlMpoM6kbaeySxhdtgQP...
2) When I ask future data scientists which ML/NN tutorials they like, they've usually found http://machinelearningmastery.com/ through Google and swear by it.
3) Josh Gordon, Developer Advocate at Google, has some simple ML/DL videos up in a 'Recipes' playlist: https://m.youtube.com/playlist?list=PLOU2XLYxmsIIuiBfYad6rFY...
If you want to just step through other people's code, you can do that too. Disclaimer: I put the list below together, and it's not for ML broadly but for DL. That said, if you want to run some examples fast and see the output, a number of folks have made that work for you -
I for one was floored to find great iOS examples (admittedly now deprecated for iOS 11). But if you have an iPhone with Metal (5s and up), Matthijs Hollemans - who wrote the iOS Apprentice at Ray Wenderlich - has Inception, YOLO, and MobileNets pre-trained and ready to go using Xcode, and it's fun to watch them work on your phone -
To understand ML you've got to have at least a basic understanding of the math. And it's really not that difficult, especially if you find the right class/book/professor/etc.
The problem is that there are a ton of terrible writers and instructors out there.
I think it's just as important to ignore the terrible stuff (pretty much any blog post on ML) as it is to learn from the good stuff (e.g. Ng's ML course).
I ask because I like and plan to use Raleway on my blog and would hate to have readability issues.
I hear claims it has been fixed, but I still see the issue in Chromium 58 on Linux.
Here's one bug which seems to have been tracking the issue with no resolution as of yet:
You can see in the final comment, from November of 2015, that the issue was, and in my experience still is, massively affecting Raleway:
EDIT: I just checked and this issue no longer affects me in Chrome 59 on Windows. Although I still have the issue on Linux.