
Though others didn't find it interesting, you might: "Ask HN: Why would anyone share trading algorithms and compare by performance?" https://news.ycombinator.com/item?id=15802785 ( https://westurner.github.io/hnlog/#story-15802785 )



I think there is value in a back-testing module; however, sharing an algo doesn't make sense to me, unless someone wants to buy mine for an absurd amount.


I think part of the value of sharing knowledge and algorithmic implementations comes from getting feedback from other experts, much like peer review, open science, and teaching.

Case in point: the first algorithm on this list [1] of community-contributed algorithms that were migrated to their new platform is "minimum variance w/ constraint" [2]. Said algorithm showed returns of over 200%, as compared with 77% returns from the SPY S&P 500 ETF over the same period, ceteris paribus. In the 69 replies, there are modifications by community members and the original author that push returns past 300%.
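To make the idea concrete: a minimum-variance portfolio minimizes w'Σw subject to the weights summing to 1. Below is a minimal NumPy sketch of the *unconstrained* closed-form solution on a made-up toy covariance matrix (the numbers are illustrative only, not from the linked algorithm; the linked version adds constraints, which would require a QP solver rather than this closed form):

```python
import numpy as np

# Toy daily-return covariance matrix for three hypothetical assets
# (illustrative numbers, not taken from the Quantopian algorithm).
cov = np.array([
    [0.10, 0.02, 0.04],
    [0.02, 0.08, 0.01],
    [0.04, 0.01, 0.12],
])

# Closed-form minimum-variance weights: w = Σ^{-1} 1 / (1' Σ^{-1} 1),
# the solution to: minimize w' Σ w  subject to  sum(w) = 1.
ones = np.ones(len(cov))
w = np.linalg.solve(cov, ones)
w /= w.sum()

print("weights:", w)
print("portfolio variance:", w @ cov @ w)
```

The resulting variance is never higher than the lowest-variance single asset, since holding that asset alone is itself a feasible portfolio. Adding a long-only (w ≥ 0) or leverage constraint is where a solver like cvxpy would come in.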

Working together on open algorithms can yield returns that exceed those of closed algorithmic development done without peer review.

[1] https://www.quantopian.com/posts/community-algorithms-migrat...

[2] https://www.quantopian.com/posts/56b6021b3f3b36b519000924


How well does it do in production, though, and what happens when multiple algos execute the same trades? Does it cause the rest of the algos to adapt and change results? It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and trade against it. I'd be curious to see the same algo return 300% in production; if it did, my bias would be uncalled for.


> How well does it do in production though and what happens when multiple algos execute the same trades?

Price inflation.

> Does it cause the rest of the algos to adapt and change results?

Trading index ETFs? IDK

> It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it.

Why does it need to do lots of trades? Is it possible for anyone other than e.g. the SEC to review trades by buyer or seller?

> I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.

pyfolio does tear sheets with Zipline algos: pyfolio/examples/zipline_algo_example.ipynb https://nbviewer.jupyter.org/github/quantopian/pyfolio/blob/...

alphalens does performance analysis of predictive factors: alphalens/examples/pyfolio_integration.ipynb https://nbviewer.jupyter.org/github/quantopian/alphalens/blo...

awesome-quant lists a bunch of other tools for algos and superalgos: https://github.com/wilsonfreitas/awesome-quant
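The headline statistics a tear sheet like pyfolio's reports (cumulative return, annualized Sharpe ratio, maximum drawdown) can be sketched with plain NumPy. This is a hand-rolled illustration on a hypothetical daily-return series, not pyfolio's actual implementation:

```python
import numpy as np

# One year (252 trading days) of hypothetical daily returns.
rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, 252)

# Cumulative return over the period.
equity = np.cumprod(1 + daily)
cumulative = equity[-1] - 1

# Annualized Sharpe ratio (zero risk-free rate assumed).
sharpe = np.sqrt(252) * daily.mean() / daily.std(ddof=1)

# Maximum drawdown: worst peak-to-trough decline of the equity curve.
peak = np.maximum.accumulate(equity)
max_drawdown = ((equity - peak) / peak).min()

print(f"cumulative return: {cumulative:+.2%}")
print(f"annualized Sharpe: {sharpe:.2f}")
print(f"max drawdown:      {max_drawdown:.2%}")
```

A tear sheet adds many more views (rolling betas, exposure, turnover), but these three numbers are the usual starting point when comparing back-tested algos.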

What's a good platform for paper trading (with e.g. zipline or moonshot algorithms)?


I disagree that price inflation follows, just because everything is hedged, but it may be true.

The too-many-trades concern is this: if there are 300 algos, and I look in the order book and see different orders from different exchanges at the same price point, then I would adapt to figure out what's happening. Not me personally, but there are people who watch order flows.

I don't paper trade; either it works in production with real money or it doesn't. You have to get a feel for spreads, commissions, and so on.

Also, in my case, I am hesitant to even use paid services, as someone could be watching them, so most of my tools are made by me. Good luck with your trading, though; if it works out, let me know, I'd pay to use it alongside my other trades.



