Case in point: the first algorithm on this list of community-contributed algorithms migrated to their new platform is "minimum variance w/ constraint". That algorithm showed returns of over 200%, compared with 77% for the SPY S&P 500 ETF over the same period, all else equal. In the 69 replies, there are modifications by community members and the original author that exceed 300%.
Working together on open algorithms can yield returns that exceed those of closed algorithmic development done without peer review.
> Does it cause the rest of the algos to adapt and change results?
Trading index ETFs? IDK
> It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it.
Why would it need to do lots of trades? And is it possible for anyone other than, e.g., the SEC to review trades by buyer or seller?
> I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.
pyfolio does tear sheets with Zipline algos: pyfolio/examples/zipline_algo_example.ipynb
alphalens does performance analysis of predictive factors:
awesome-quant lists a bunch of other tools for algos and superalgos:
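For context on what those tear sheets report: here is a minimal pure-Python sketch (illustrative only, not pyfolio's actual code) of three of the headline numbers a tear sheet shows, cumulative return, annualized Sharpe ratio, and maximum drawdown, computed from a list of daily returns:

```python
import math

def cumulative_return(daily_returns):
    """Compound a series of simple daily returns into a total return."""
    total = 1.0
    for r in daily_returns:
        total *= (1.0 + r)
    return total - 1.0

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio, assuming a 0% risk-free rate."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

def max_drawdown(daily_returns):
    """Largest peak-to-trough drop of the compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in daily_returns:
        equity *= (1.0 + r)
        peak = max(peak, equity)
        worst = min(worst, equity / peak - 1.0)
    return worst

# Toy example: five days of made-up returns.
returns = [0.01, -0.02, 0.015, 0.003, -0.007]
print(cumulative_return(returns))
print(sharpe_ratio(returns))
print(max_drawdown(returns))
```

With pyfolio itself you would not hand-roll these; calling `pyfolio.create_returns_tear_sheet(returns)` on a pandas Series of daily returns produces these statistics plus the plots in one call.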
What's a good platform for paper trading (with e.g. zipline or moonshot algorithms)?
The "too many trades" concern is this: if there are 300 copies of the same algo running, and I look in the order book and see different orders from different exchanges at the same price point, then I would adapt to figure out what's happening. Not me personally, but there are people who watch order flows.
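As a toy illustration of that kind of order-flow watching (purely hypothetical sample data and thresholds, not any real surveillance tool), one could group resting orders by price level and flag levels where several venues are quoting the same price at once:

```python
from collections import defaultdict

# Each order: (venue, side, price, size). Hypothetical sample book.
orders = [
    ("NYSE", "buy", 100.25, 500),
    ("ARCA", "buy", 100.25, 300),
    ("BATS", "buy", 100.25, 700),
    ("NYSE", "sell", 100.30, 200),
    ("EDGX", "buy", 100.10, 400),
]

def crowded_levels(orders, min_venues=3):
    """Return (side, price) levels quoted by at least min_venues distinct venues."""
    venues_at = defaultdict(set)
    for venue, side, price, _size in orders:
        venues_at[(side, price)].add(venue)
    return {level: sorted(v) for level, v in venues_at.items()
            if len(v) >= min_venues}

print(crowded_levels(orders))
```

A level showing up across many venues at once is the sort of pattern an order-flow watcher might treat as a crowd of identical algos, which is exactly the adaptation risk described above.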
I don't paper trade; either it works in production with real money or it doesn't. You have to get a feel for spreads, commissions, and so on.
Also, in my case, I'm hesitant to even use paid services, since someone could be watching them, so most of my tools are built by me. Good luck with your trading, though; if it works out, let me know, I'd pay to use it alongside my other trades.