
Can recurrent neural networks be successfully applied to the stock market?

I can't seem to find a single paper in the finance area.




To your first question: yes, for some definitions of "successfully"[1][2][3][4]. It's difficult, and it isn't as though you train your algorithm and watch it make money. It's more that some parts of your data analysis become incrementally easier or more accurate. Fundamentally it's similar to applying RNNs to any other signal-processing problem.
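To make the signal-processing framing concrete, here is a minimal sketch of treating next-step return prediction as a sequence problem. It assumes PyTorch; the window length, layer sizes and toy random-walk data are placeholders for illustration, not anything taken from the papers below.

    # Minimal sketch (PyTorch assumed): predict the next log-return
    # from a window of past returns. All sizes are arbitrary.
    import torch
    import torch.nn as nn

    class ReturnRNN(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):             # x: (batch, seq_len, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])  # predict the next-step return

    # toy data: a random-walk stand-in for real log-returns
    returns = torch.randn(1000, 1)
    window = 30
    X = torch.stack([returns[i:i + window] for i in range(len(returns) - window)])
    y = returns[window:]

    model = ReturnRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(5):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()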

To your (implied) second question: tech companies like Google, Facebook and Microsoft have a strong incentive to publish their researchers' work; it's good for recruiting and it pushes the field forward. Financial companies have a strong incentive not to publish; in fact, they are incentivized to be secretive. In practice this means the only quant-finance papers you see are published by academics, not by working quants. The result is few stunning results coming out of the financial industry (where "stunning" means impressive and maybe immediately useful), whereas we're almost getting progress fatigue from the papers published by tech companies.

[1]: https://arxiv.org/pdf/cond-mat/0304469.pdf

[2]: http://cs229.stanford.edu/proj2012/BernalFokPidaparthi-Finan...

[3]: https://clgiles.ist.psu.edu/papers/MLJ-finance.pdf

[4]: http://www.kolegija.lt/dokumentai_img/Maknickiene-3-8-Pages-...


Thank you!


Finance differs fundamentally from most other machine learning domains because new data diverges qualitatively from the training data so much more quickly. Roads aren't reconfiguring themselves daily to avoid being detected by a self-driving car; tumors aren't appearing any differently on mammograms today than they did yesterday.

So whereas in most domains, a large, growing and representative corpus of data should allow a wide range of ML approaches to steadily decrease their error, in finance the error will generally increase over time for any particular classifier approach, unless it is a sufficiently dynamic meta-approach that produces new classifiers in an ongoing way as new data becomes available. (And even then, the new data is only marginally better than old data at predicting the nature of the market tomorrow -- it's still way more unpredictable than traffic.)
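One hedged sketch of what "producing new classifiers in an ongoing way" can look like in practice is a walk-forward loop: refit a fresh model on a rolling window of recent data and only ever predict one step ahead. scikit-learn and a ridge regression are assumed here purely for illustration, and the window length and toy data are arbitrary.

    # Walk-forward sketch: a brand-new model is fit at every step,
    # using only the most recent `window` observations.
    import numpy as np
    from sklearn.linear_model import Ridge

    def walk_forward(X, y, window=250):
        preds = []
        for t in range(window, len(y)):
            model = Ridge(alpha=1.0)                     # new classifier each step
            model.fit(X[t - window:t], y[t - window:t])  # only recent data
            preds.append(model.predict(X[t:t + 1])[0])   # one step ahead
        return np.array(preds)

    # toy stand-in data: 3 lagged "signals" and a next-day return target
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1500, 3))
    y = rng.normal(size=1500)
    out_of_sample = walk_forward(X, y)                   # 1250 one-step predictions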

How do you build a sufficiently dynamic meta-approach, one that generates new classifiers on an ongoing basis that are actually good? You need to discount the value of large amounts of data, because that data is fundamentally less predictive than data in other domains, and you need to strenuously avoid fitting to irrelevant signals in the data. This is harder than just avoiding overfitting in the usual sense; yes, you need to keep the number of independent variables in your model small relative to the amount of data. But you also need to attentively inspect and guide which signals your model tries to use, even if there aren't too many of them.
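One possible reading of "discounting the value of large amounts of data" is to weight recent observations more heavily and keep the model deliberately small and heavily penalized. The half-life, feature count and penalty below are illustrative assumptions (scikit-learn assumed), not recommendations.

    # Sketch: exponentially down-weight old samples, keep few features,
    # and shrink coefficients hard. All numbers are made up.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 5))        # only a handful of inputs
    y_train = rng.normal(size=1000)             # stand-in for future returns

    age = np.arange(len(y_train))[::-1]         # 0 = most recent observation
    weights = 0.5 ** (age / 60)                 # ~60-sample half-life decay

    model = Ridge(alpha=10.0)                   # heavy shrinkage on coefficients
    model.fit(X_train, y_train, sample_weight=weights)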

Here's where neural networks, and genetic algorithms, are especially difficult to use in finance: they are more opaque to human inspection and selection, so they make it harder to spot elements of a model that are of suspect value. Less sophisticated approaches like regressions and decision trees, on the other hand, are easier to inspect and thus allow for more careful human-computer cooperative iterative design.
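A small example of the inspectability point, assuming scikit-learn and invented feature names: with a shallow tree you can print the fitted rules and ask whether the signals it leans on are plausible, which is much harder to do with a trained RNN.

    # Sketch: fit a shallow tree on toy data and read off its rules.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(1)
    features = ["momentum_20d", "volatility_10d", "day_of_week"]  # invented names
    X = rng.normal(size=(500, 3))
    y = 0.3 * X[:, 0] + rng.normal(scale=0.5, size=500)

    tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=features))  # human-readable split rules
    # If "day_of_week" dominates the splits, a human can ask why and veto it.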

In a domain where the data aren't trying to squirm out of your hands, you can let testing error show you how to iterate in your meta-approach to producing classifiers; your classifiers will get better. But in finance, that iterative process may not improve your classifier production method fast enough to keep pace with the changing data, and so you need a process that requires less practical failure, and instead uses more human understanding and inspection, to guide iteration.




