Actually, that's not how tests work. Writing a test after the code has been produced should not be considered a real test. Let me quote Robert C. Martin on this.
If we lose the production code, we end up with a better designed system that stays clean because it has tests. If we lose the tests, then the production code rots and the team slows down in a never ending spiral of lost productivity.
So we can conclude that if it came down to a choice between the tests and the production code, we'd rather preserve the tests. That means the tests are a more important component of the system than the production code is, because the tests are the specs.
Very nice. I might replace my own personal framework with this - it's certainly cleaner.
A suggestion I'd make: rather than simply taking a price series (which seems to be based on daily data), it might be useful to build on the open/high/low/close series, and then allow trades to happen at a random price between the high and low rather than simply at the close.
I find this more reflective of the uncertainty in trading since there is no reliable way to actually trade at the close price.
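For concreteness, here's a minimal sketch of what I mean in plain pandas/NumPy (not tied to prophet's API; the column names just assume a standard daily OHLC frame): draw each simulated fill uniformly between that bar's low and high.

    import numpy as np
    import pandas as pd

    def simulate_fill_prices(ohlc, seed=None):
        """Draw one fill price per bar, uniformly between low and high,
        instead of assuming every trade happens at the close."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=len(ohlc))
        return ohlc["Low"] + u * (ohlc["High"] - ohlc["Low"])

    # Tiny hand-made daily OHLC frame for demonstration
    ohlc = pd.DataFrame(
        {"Open": [10.0, 10.5], "High": [10.8, 11.0],
         "Low": [9.9, 10.2], "Close": [10.5, 10.9]},
        index=pd.to_datetime(["2014-01-02", "2014-01-03"]),
    )
    print(simulate_fill_prices(ohlc, seed=42))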
There's a PandasDataGenerator which is a wrapper around the pandas library's DataReader. For the Yahoo source, you have the following data options: "Open", "High", "Low", "Close", "Volume", "Adj Close". The YahooCloseData generator actually uses "Adj Close". You can implement high, low, or open generators by copying YahooCloseData and reading the respective key instead of "Adj Close". Feel free to contribute these generators upstream too :)
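To make "reading the respective key" concrete, this is roughly what the underlying call looks like outside of prophet. It's plain DataReader usage, not the generator interface; DataReader has since moved from pandas into the standalone pandas_datareader package, and the Yahoo source may not always respond these days.

    import pandas_datareader.data as web

    # The Yahoo source returns Open/High/Low/Close/Volume/Adj Close columns
    # (assuming the source still responds; it has been flaky over the years).
    frame = web.DataReader("AAPL", "yahoo", start="2014-01-01", end="2014-06-30")

    adj_close = frame["Adj Close"]  # what YahooCloseData reads
    highs = frame["High"]           # a high-price generator would read this instead
    lows = frame["Low"]
    opens = frame["Open"]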
If you're looking to model that uncertainty, see the slippage section in:
If you have a dataset with more frequent than daily data, you can store the sell order on your order generator and process it on the next tick. That, combined with slippage and commission, will probably give you the most accurate trading model.
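As a rough standalone sketch of that flow (not prophet's order API; the slippage and commission values below are made up): hold the order for one tick, fill it at the next tick's price pushed against you by a slippage factor, then charge a flat commission.

    def fill_next_tick(order_qty, next_tick_price, slippage=0.0005, commission=1.00):
        """Fill an order one tick after it was generated.

        order_qty > 0 buys, order_qty < 0 sells. Slippage moves the fill
        price against the trader; commission is a flat fee per fill.
        Both values are illustrative only.
        """
        direction = 1 if order_qty > 0 else -1
        fill_price = next_tick_price * (1 + direction * slippage)
        cash_delta = -order_qty * fill_price - commission
        return fill_price, cash_delta

    # A sell order generated on tick t, processed against tick t+1's price
    print(fill_next_tick(order_qty=-100, next_tick_price=101.25))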
Let me know if you have any other thoughts on how it can be better modeled.
Please don't use Adjusted Close for backtesting execution. Adjusted prices are not real. They never happened, and using them will add unknown error to your backtest results.
Back-adjusted prices are fine for building a model, but when backtesting the model and simulating executions, you should always use real prices.
You're right. I really need to bite the bullet and generate stock split data. That was the main reason I used adjusted prices: splits were skewing my tests a lot.
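One stopgap I might use in the meantime (a rough sketch, not a substitute for a real corporate-actions feed): the ratio of Yahoo's "Adj Close" to "Close" is the cumulative adjustment factor, and day-over-day jumps in that factor mark ex-dates. The ratio folds in dividends as well as splits, so the threshold below is an arbitrary filter.

    def adjustment_events(close, adj_close, threshold=0.02):
        """Flag dates where the cumulative adjustment factor jumps.

        close and adj_close are pandas Series indexed by date. The factor
        adj_close / close changes at split and dividend ex-dates; a 2%
        jump threshold (arbitrary) filters out most dividend-only moves.
        """
        factor = adj_close / close
        jumps = factor.pct_change().abs()
        return jumps[jumps > threshold]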
Why do you say you can't reliably get the closing price? You should be able to send an MOC order to the listing exchange to get the official closing price.
This looks interesting. I'm familiar with the basics of investing, but have never gotten into quantitative analysis. Does anyone know of a good starting point (book, online course, anything)?
I'm adding a tutorial for more complicated strategies to the documentation. What did you have in mind for the visualization? Currently you can just install matplotlib and graph it, or export a backtest to JSON.
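For example, once you have the equity curve as date/value pairs, a graph is just a few lines (the JSON field names below are hypothetical, not prophet's actual export format):

    import json

    import matplotlib.pyplot as plt
    import pandas as pd

    # Hypothetical export shape: a list of {"date": ..., "value": ...} records.
    with open("backtest.json") as f:
        records = json.load(f)

    curve = pd.Series(
        [r["value"] for r in records],
        index=pd.to_datetime([r["date"] for r in records]),
    )
    curve.plot(title="Backtest equity curve")
    plt.show()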
To start, use free historical data from Yahoo. Once you've backtested, try executing manually with your online brokerage (i.e., if it says buy, log in to E*Trade and buy). After that, upgrade to IB or Lime (depending on your needs).
https://github.com/Emsu/prophet/