Not only is it (very!) easy to overfit backtests (especially with as little data as they're using here), but backtests are nothing like the real world. In the real world there are HFT traders front-running you, latency, jitter, fees, hidden order types, slippage, and a lot of other complexities that don't fit into a short HN post. Whenever you see a paper ending with a backtest you can already assume it's BS.
It's similar to training a robot in an extremely simplified 2D simulation environment without physics or other interactions, and then claiming one has built a real robot. A mistake many people make is believing that trading is all about AI. But in reality, the model often matters less than infrastructure/latency/system/data issues.
In addition to that, people who are actually "good" at trading don't publish papers, they silently make money. Papers are typically published by academics or students who have never built anything profitable but would like to put a paper on their resume. I have yet to see a single good academic paper about trading.
An example of a pretty interesting and accessible one is "101 Formulaic Alphas":
 - https://arxiv.org/pdf/1601.00991.pdf
Alpha#33: rank((-1 * ((1 - (open / close))^1)))
rank(open/close - 1)
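The simplification checks out: since -1 * (1 - x) = x - 1, the two expressions rank identical values. A quick pure-Python sketch (the sample prices are made up for illustration):

```python
# Pure-Python check that Alpha#33, rank(-1 * (1 - open/close)),
# equals rank(open/close - 1). Sample prices are made up.

def rank(xs):
    """Cross-sectional rank: 1-based position of each value when sorted ascending."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

opens  = [100.0, 101.5, 99.8, 102.0]
closes = [101.0, 100.5, 101.2, 101.0]

alpha33    = rank([-1 * (1 - o / c) for o, c in zip(opens, closes)])
simplified = rank([o / c - 1 for o, c in zip(opens, closes)])

assert alpha33 == simplified  # algebraically identical: -1*(1 - x) == x - 1
print(alpha33)  # [2, 4, 1, 3]
```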
Martingale is a sure-fire way to lose all your money rather quickly.
> Alpha#90: ((rank((close - ts_max(close, 4.66719)))^Ts_Rank(correlation(IndNeutralize(adv40, IndClass.subindustry), low, 5.38375), 3.21856)) * -1)
I wonder how these magic numbers get picked (4.66719, 5.38375, etc.) -- I guess there is some optimization solver that attempts to find the most profitable variables for a given alpha formulation, but isn't this approach also very vulnerable to overfitting?
These alphas will likely only be profitable for a short time period, as long as the market data distribution (i.e. the strategies of other market participants) doesn't change. So you would need to continually optimize and update them.
The way I think about it is that you are essentially finding the right parameters to "exploit" the combination of algorithms of all other participants, where algorithm could also be a human looking at charts and following certain rules, with a lot of random noise from retail traders thrown in.
> (sign(delta(volume, 1)) * (-1 * delta(close, 1)))
That's crazy. Would be interesting to see WTF a "mega-alpha" actually does using these strategies.
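For intuition, here is a small sketch of what that alpha computes, with made-up prices and volumes; `delta(x, d)` is today's value minus the value d days ago, following the paper's definitions:

```python
# Sketch of the quoted alpha: sign(delta(volume, 1)) * (-1 * delta(close, 1)).
# Sample prices and volumes are made up.

def delta(xs, d=1):
    return [None] * d + [xs[i] - xs[i - d] for i in range(d, len(xs))]

def sign(x):
    return (x > 0) - (x < 0)

closes  = [100.0, 101.0, 100.5, 102.0]
volumes = [1_000, 1_200, 1_100, 1_500]

signal = [
    None if dv is None else sign(dv) * (-1 * dc)
    for dv, dc in zip(delta(volumes), delta(closes))
]
# Reads as: trade AGAINST yesterday's price move when volume rose,
# and WITH it when volume fell -- a volume-gated contrarian signal.
print(signal)  # [None, -1.0, -0.5, -1.5]
```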
Well, that is mostly true. But never discount anything. There are people like me who used to love the data analysis and prediction part of these markets. I got hooked on the markets because of it. I was not interested in making money and naively thought my average pay was good enough. When I first built (or my machine built) a working strategy (in early 2008), live traded/tested it for a couple of months, and told a few colleagues the details of the strategy - they did not take me seriously. This was even before I understood NNs or any of the scikit-learn tooling. I knew I wanted to get into financial markets - I went to a broker to sell the automated strategy and to seek a full-time job as an algo trader - they thought I was trying to scam them even after seeing the contract notes. Plus, algo trading had not picked up back then; I only found out later about such scams. It took me 3 more years and a financial crisis to understand the value of making "much more than enough" money. And in retrospect I know those were just stupid attempts to convince others and to give it away.
You're right that there are probably some gems and people writing up good posts and articles. However, 99% of what comes to my inbox, which is certain newsletters and arXiv subscriptions, is clearly BS. I'm particularly disappointed with arXiv/academia, because in other fields like biology and CS/ML/AI, published papers tend to be of higher quality than your average blog post. In trading the opposite seems to be true. Seeing a good trading paper on arXiv is incredibly rare. I would even go as far as saying that reddit is a significantly better source of information than arXiv for this field.
It's expensive but I find it a really good source for ideas.
In the case of trading, any paper not tackling these issues head on is not likely to be useful.
As you go to shorter time scales you get more usable data, but then you also need to deal with other issues such as latencies/jitter, market impact, complex order types, order book queues, etc. It becomes a different game.
You should really google up something called the Gell-Mann amnesia effect. 99.9% of everything is shit. Including biology, CS, ML and especially "AI."
Of course trading papers are even more universally shit, but once in a while someone publishes a risk factor that is non-obvious to me.
But like you said, the range here is incredibly wide and largely depends on how well your strategies do and if you have your own desk/fund.
Bonus distribution was reverse exponential. Like traditional consultancy partnerships, a small few of the old hands made serious money but those at the “bottom” made ok money but after a few years their FAANG based contemporaries were doing better. Advancing up the ranks was not guaranteed even if you survived the frequent blood lettings.
It seems to me like those tragic stories of geniuses who died young. What could have been if their ideas had reached the world? But instead of dying, these geniuses got sequestered into finance and secrecy, volunteering to make no mark at all on the world in their passing.
If they haven't tested this in actual trades and measured results, it's probably worthless. Even backtested strategies at actual firms observe decays (or don't work) when they get put live. And those are places where they invest in (and are incentivized to get right!) backtesting methodology.
I think this is a little unfair. I've seen high-quality papers from PhD students who then got hired by financial firms and were apparently very successful. Every good real-world AI system requires both good engineering and good science, and it's disingenuous to suggest that all science that isn't actively being applied yet is BS.
As a side note, what this specific paper did is neither novel nor innovative, so it's very fair to criticize it. A3C is 4 years old, and they just take it and run it on some data. It's like downloading a convnet and running it on MNIST. There have been hundreds of papers on RL + trading. I see them in my arXiv emails every other day and they all do the same thing.
> This paper, just like pretty much any academic paper on the subject, ends with a backtest on historical data, not a real system
> Whenever you see a paper ending with a backtest you can already assume it's BS.
Edit: downvoters, care to elaborate?
I know this, and I ran a company where people should know this, but so many people are so easily swayed by "authority"
like, so and so made trading programs for Investment Bank Co 20 years ago so you know their trading algorithm has to have merit
uh no, they are not retired, they are broke and can't even fund $10k into a trading account to try it
at this point all I would say is just smile and nod.
Unfortunately it seems like most ML people are not really interested in trading, perhaps because it has such a bad reputation (which is IMO unjustified) - so they work on games instead :)
You couldn't be more wrong on this. Stock market trading has the lowest barrier to entry of any endeavor. All you need is $1000 and a Robinhood account, which you can open on your phone in 5 minutes or less.
I've been following HN for a while; every time someone comes up with a trading algo or posts a link to one, there were hundreds of upvotes and lots of comments.
Quandl has many free, or low cost stock market/commodity datasets.
I'm not sure what you mean by a "simulator". One of the greatest challenges in applying RL to the stock market is precisely that the market itself is not an MDP.
There is a good reason trading firms pay a lot of money (sometimes millions) for fine-grained historical data from exchanges. It's not only about speed. For interesting experiments you IMO need L2 or L3 order book data, ideally somewhere on second or sub-second scales. That's not HFT (which is nano and micros), but somewhere in the "middle" - it's a different world than what you are talking about.
By simulators he means market simulators for L2/L3 data with a matching engine, latencies, queue positions, jitter, complex order types, etc. You can't simulate other market participants (at least not fully, but there are techniques to even estimate this based on live trading feedback), but there are still many things left that you can simulate in a realistic way during training and backtesting. Trading companies typically have their own high-performance simulators built in house. Some of these are incredibly complex. Good simulators can give you a huge edge and are absolutely necessary.
"outliers and events outside of the data, news" : these are precisely the stuff your models need to learn, and the fact that you consider them noise tells me most folks have no clue how to predict these "noise".
I've long understood that this was true. It makes intuitive sense.
But are there any cases where it is not true?
Is it possible to "spread the wealth" when it comes to trading, or any money-making endeavor?
Or does it always reduce down to "I win only because you lose"?
Then there is the cultural aspect. People who are working in trading are just not used to sharing openly. They don't write online, or anywhere. They are not even allowed to write due to their employers. And people who work in academia are naturally not working on "production" systems - their only job is to write, not to build. So you almost never see people in the intersection of: writes-online & understands-trading & is-not-in-academia
But that's an exception rather than a rule
The real value though of publishing a trading strategy is in signaling to future employers.
Ultimately, money is made by the ability to come up with new strategies. Any single strategy is only going to live for so long before it dies and it is no longer profitable.
Opening up one's secrets of trading seems to only make sense if one has found deeper, more effective secrets, so that the old crop is not going to be seriously competitive, but a bit of good PR would come in handy.
|            | Play | Don't Play |
|------------|------|------------|
| Play       | -5   | +10        |
| Don't Play | -10  | 0          |
> In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants.
 You can change the +10 to +9 if you want to make it the absolute highest total payout.
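A mechanical way to see that the payoff table above is not zero-sum (assuming the entries are the row player's payoffs and the game is symmetric, so the column player's payoff in cell (i, j) is entry (j, i)):

```python
# Zero-sum requires payoff_A + payoff_B == 0 in every cell.
# With a symmetric game, payoff_B in cell (i, j) is A[(j, i)].

actions = ["Play", "DontPlay"]
A = {("Play", "Play"): -5, ("Play", "DontPlay"): 10,
     ("DontPlay", "Play"): -10, ("DontPlay", "DontPlay"): 0}

violations = [(i, j) for i in actions for j in actions
              if A[(i, j)] + A[(j, i)] != 0]
print(violations)  # [('Play', 'Play')] -- so the game is not zero-sum
```

The off-diagonal cells cancel (+10 against -10), but both players playing loses -5 each, so total utility is destroyed there.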
For example, a gold miner may sell gold futures to guarantee that he won't go out of business once the construction of the new gold mine is complete. There are many other examples.
I find that in reality opportunity cost rarely matters.
It has absolutely nothing to do with opportunity cost, or whether you, personally, are a "loser". But if you're a "winner", someone else must be the "loser".
Let's say I am a market maker offering to buy Apple shares at $99 and sell them at $100. Let's take an ex-Apple employee who owns some shares. He just had a family emergency and wants to liquidate his shares to get cash, and he needs it quickly. He doesn't care about paying a few dollars extra in exchange for a quick trade because he needs to pay a bill tomorrow. I buy his shares for $99. He is happy because he immediately got his cash.
On the other side, there is a retail investor doing long-term investing who wants to add Apple to their portfolio. They also don't care about a few cents because they're holding the stock for a decade and love the new CEO. They buy my Apple shares from me for $100. They are happy because I can guarantee them a stable price for a decent number of shares.
All participants are happy. I just made $1 from the spread for providing liquidity, the investor got the long-term investment they wanted, and the ex-Apple employee got his cash.
Sure, both sides of the market could have made more optimal trades if they had put in more effort and "optimized" their trades with algos and somehow skipped the middle-man, but they would've sacrificed convenience and time, which may be worth more to them than the little bit of extra $ they paid. Aren't we all winners?
When you go buy bananas in your grocery store you also don't complain about them taking a cut for providing liquidity. You don't say the farmer has "lost" money because the consumer paid more than what the farmer originally sold for to the grocery store. The farmer is happy because otherwise he may not have traded at all or his bananas may have gone bad (= needs to trade quickly). This is no different.
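The arithmetic of the Apple example above can be sketched with made-up reservation prices (the worst price each side would still accept); every party ends up with positive surplus, so the interaction is positive-sum:

```python
# Sketch of the market-maker example: reservation prices are made up
# to illustrate that all three parties can gain from the trades.

bid, ask = 99.0, 100.0        # market maker's quotes

seller_reservation = 97.0     # seller needs cash; would accept down to $97
buyer_reservation  = 102.0    # long-term buyer would pay up to $102

seller_surplus = bid - seller_reservation   # 2.0
buyer_surplus  = buyer_reservation - ask    # 2.0
maker_profit   = ask - bid                  # 1.0 (the spread)

total = seller_surplus + buyer_surplus + maker_profit
print(total)  # 5.0 -- everyone gains; not zero-sum
```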
In addition, companies produce things, and some of that wealth gets returned to investors through dividends, interest (e.g. on bonds), and buybacks (in my example, say each unit generates $1 in dividends; total wealth is now $40(!) after starting at $20 - $20 in cash (up from $10) and $20 worth of units (up from $10)).
In fact, we see this growth everywhere around us, as both the number of people and the amount of goods and services per person are increasing!
It is a zero sum game. Nobody is producing anything, therefore for one to win another must lose.
Many HFT jump out when things get volatile, when liquidity is actually required.
Ultimately HFT is doing nothing of societal value; the race down to zero is never-ending, and we are wasting huge amounts of resources on a totally pointless march towards zero latency. Exchanges should introduce random delays to serve the market participants who really want to hedge/buy/sell; then we could shift some of those resources to the real world. The costs required to compete at the lowest latencies are large, forcing small and medium players out of the game, which is also bad.
The system is hugely inefficient. The costs as latencies get lower are ever higher, for an extremely similar end result. The law of diminishing returns.
> Many HFT jump out when things get volatile, when liquidity is actually required.
Do you have a citation on that? If you look at the preliminary Q1 results of Virtu Financial (the only publicly traded HFT firm), they seem to be doing more trading than ever in these volatile markets.
> Ultimately HFT is doing nothing of societal value, the race down to zero is never-ending and we are wasting huge amounts of resources on a totally pointless march towards zero.
HFT is a mature industry. Latencies have mostly stabilized, and profitability is way down in the last few years. Many firms are merging/consolidating. So in the past few years society is actually spending fewer resources - both financially and from a human capital standpoint on HFT than it did in the past.
> Exchanges should introduce random delays to allow market participants who really want to hedge / buy / sell, then we can shift some of the resources to the real world.
IEX has been doing something fairly similar to that for a few years now. They have ~3% of US equities market share. People have the option of trading there, but they mostly choose not to.
> The system is hugely inefficient. The costs as latencies get lower are ever higher, for an extremely similar end result. The law of diminishing returns.
Due to consolidation, costs are actually decreasing. Could it be that the market is... working?
 - https://ir.virtu.com/press-releases/press-release-details/20...
Similar story from Flow Traders:
So what seemed like a quick drive-by wasn't actually correct.
This feels almost like a "no true Scotsman" situation. Why is liquidity not "actually required" when volatility is low? Is it a moral obligation for any trader to catch a falling knife? I see this condition of "when liquidity is actually required", but I never understood why there was such a strong feeling for it. Why do you believe this?
I don't know, I could probably take a similar view of so many jobs in tech. What does society really get from Snapchat, what do they get from HQ Trivia, what do they get from people making powerpoint presentations with arrows that point to synergies. What's the point of any job with some amount of abstraction?
> The system is hugely inefficient
Do you know how efficient the system was before HFT started up? And, do you know how many people were working in trading before, and how many are, for a similar fraction of stock volume?
> The law of diminishing returns.
Again this weirdly mixes HFT with electronic automated trading, which I really don't think anyone in the domain would readily mix.
HFT arbing over latency is entirely different from the automation of boring trader tasks, which sees fewer people employed to do the same thing in the front office.
I can't continue this more, it's just blind allegiance from people who are clearly not in the domain.
HFT != electronic trading
HFT is also not equivalent to arbing over latency.
For example, an HFT trader makes pennies from each trade by exploiting tiny price inefficiencies. He essentially takes money from a "stupid" retail investor who does not know how to optimize his trades. However, the retail investor may not actually care about optimizing trades and just wants to liquidate assets or make a long-term (10+ years) bet. He is totally fine with throwing away a few dollars because optimizing his trades through complex algorithms would be too much work. Here, both parties win: the HFT trader gets paid because he provides convenience, or liquidity, to the retail trader. The same would apply to any human market maker; it doesn't have to be HFT.
And yes, HFT liquidity may disappear during HUGE market movements due to risk, but it doesn't disappear as long as both parties get what they want and the risk is manageable, which is "most of the time". Of course, HFT has other issues such as the race to zero and unfair advantages for a few central players, and I don't want to defend HFT. But saying that "it's all zero sum" is not correct.
An analogy is your nearest grocery store. They're a market maker because they buy from the manufacturer, sell to the consumer, and profit from the spread. Do you also argue that these are all zero-sum and we should cut them all out and connect all consumers and farmers directly? And their liquidity also disappears when black swans (corona) happen :)
Melon Usk (say) wants to make cars, but he can’t pay for the factory himself, so he forms a company, sells shares in it, and uses the proceeds to build a factory. Now he and his shareholders can make cars, so the shares are worth more.
Who lost money?
Now, since you appear to know about these things, among all the available papers/articles/blog posts/books, is there any that you would recommend as being less wrong than the rest? For example, a while ago I read this book, and it didn't seem so bad, but I'm not in the industry. Can you recommend anything, even with caveats?
 is okay. I disagree with a lot in there, but it's pretty well written and one of the better books on the subject.  is very old, but it's one of my favorites. It's very mathematical, and the ideas still apply today.  is a good introductory overview.
As a student who is looking to get started with trading and enjoys the mathematical/analysis part of it, do you have advice on where to begin? I find very few resources in this area and it's very hard to get on this career path - my experience is on the ML side of things and I want to transition into trading. Any advice will be really helpful - thanks!
There are a few different types of roles in the quant world, and a number of different types of funds:
- Alpha/signal research: apply quantitative methods to come up with profitable trading ideas and strategies. This is kind of like "Data Science" coming from tech - finding the insight in the data.
- Quant development: build the infrastructure for the data and strategies. This is kind of like "Data Engineering" coming from tech - a lot of ETL and general development work.
- Portfolio analytics/execution: figure out how to combine different alpha ideas into a portfolio that can be traded. Involves trading and monitoring the live portfolios.
- Risk management: think of all the possible "risks" the portfolio can be exposed to and ensure they're properly addressed/hedged/accounted for.
This is a broad generalization which can vary greatly from place to place. Typically the smaller funds will have more blurred lines and lots of roles that involve doing multiple of the listed above. At the larger funds, the roles will typically be more well defined and segmented.
Lots of quant funds are happy to hire people with no finance/trading background if they're strong enough in other key areas. A lot of the "finance"-specific stuff can be picked up on the job. Also, ML is quite in demand right now.
To front-run someone's order you need to have advance information about their orders? Normally this means the front-runner is operating as a broker. I can't imagine any HFTs using other HFTs as brokers to forward their orders to the exchange?
I’m sure that there is market microstructure stuff/front running practicalities that would make this harder than it sounds but still you wouldn’t completely be in the dark.
Front running has a specific definition in terms of market regulation (see my other comment) and what you describe is not front-running.
Front-running is when someone with a fiduciary duty - typically a broker or dealer - takes an order from a client and then trades on their own book BEFORE executing the client's order knowing the effect of the clients order on the market and knowing that they can exploit this effect for their own benefit.
I know of no HFTs which have such a relationship with rival HFTs and can't even imagine such a relationship existing never mind it being a frequent cause of why strategies perform poorly for HFTs.
Front-running hasn't been a feature of markets for decades at least. Any sniff of front-running would have the SEC or CFTC fine your company into oblivion and possibly result in jail time or at the very least lifetime bans from the financial industry.
Being faster than someone else isn't "front running" them nor is spotting patterns in other participant's behaviour and exploiting those patterns. The definition of "front running" is reasonably specific: https://en.wikipedia.org/wiki/Front_running
- Trading isn't just about deciding what to buy and sell, the sexy part that everyone thinks is great. I even had colleagues who thought they were special because they worked closer to the strategies, which meant that certain less glamorous parts were neglected.
- Less glamorous parts like coding the software to read in the market data and send out orders.
- Less glamorous parts like schmoozing with brokers to get them to lower your costs.
- And maintaining infrastructure, which somehow people think should come as part of coding.
Now I'm not saying that RL won't help you. It's just that focusing on the "intelligent" part of the trading system tends to lead to disappointment, as you discover some unknown restrictions on your model that you hadn't thought of. Things like when you find out short selling was prohibited during the period that your model backtest was shorting.
My main red flags when reading papers are:
- Choosing a dataset from a small market. Basically any market that isn't the US or Western Europe large caps. You'll discover both price impact and high fees quite late in the game.
- Choosing a very small subset of the market. Smaller n, more noise and overfitting.
- Short periods. N again.
- Long intervals between decision making. N again again.
That's not to say there's nothing useful to be read though. You might be inspired by something you come across.
The small subsets and super high Sharpe ratios look suspicious.
Further red flags in this particular case:
- Completely unclear what kind of data they're using. Are they assuming they can buy and sell one individual contract at the bid price each minute? Or did I miss some crucial information about bid-ask spreads?
- Abstract mentions a profit, not an information ratio/Sharpe ratio or anything similar.
- During training they need to tweak the reward function in order to not end up with "buy and hold"? How good is their strategy compared to buy and hold?
- Plots without proper labels.
I wrote a paper that demonstrated that A3C is as exploitable as a uniform random strategy in board games (specifically, some poker variants): https://arxiv.org/abs/2004.09677
(Exploitable is a technical term that is defined in the paper; basically, it's "how much can someone who knows everything about your strategy beat you by?")
So I would be very surprised if this survives contact with other traders.
Interesting! What are the main papers in this area?
Any intuition why this is the case? Is it because A2C generally results in brittle policies?
The alphastar paper and blog post do a good job discussing these issues as they had similar problems. I’d say that’s a great starting point (and then following their references).
Depending on how liquid the market they studied is, code that assumes there is never any slippage may not be very realistic.
There's no comparison to a simple buy-and-hold strategy, which may be less interesting from a computer science perspective but is a good way to avoid spending lots of money on transaction costs.
(I once posted my own algorithmic trading project to HN that had very flawed, naive assumptions about what trades could be executed.)
Edit: It looks like the comms are indeed around 10 RUB per side (which is approx 1 bps). https://www.moex.com/en/contract.aspx
If the model is trading single lots, then this is a reasonable cost assumption; otherwise it isn't. The paper's use of 2.5 RUB as costs is unreasonable.
Try generating a time series in Excel with Brownian noise and watch as it is indistinguishable from price charts.
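The same experiment in Python instead of Excel - a random walk of cumulative Gaussian returns that is hard to tell apart from a real price chart (the 1% daily volatility and starting price are arbitrary choices):

```python
# Generate a fake "price series" from pure Gaussian noise.
# Plotted, it looks remarkably like a real stock chart.

import random

random.seed(42)  # deterministic for reproducibility

price = 100.0
path = []
for _ in range(252):                    # one "year" of daily closes
    price *= 1 + random.gauss(0, 0.01)  # ~1% daily volatility, zero drift
    path.append(price)

print(round(path[0], 2), round(path[-1], 2))
```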
Since you seem to be industry practitioners: I moved away from RL 10 years back, disillusioned with the lack of real-world applicability. Has that changed significantly? The only major name I've heard of is Vowpal Wabbit. Maybe there are more applications being done in stealth. Any insight? Thanks
However, I don’t think the current limited use of RL is a permanent situation just that the most exciting uses of RL are extremely difficult problems that involve long-time horizons and planning. For example, RL could be used to automatically prove mathematical theorems which would be amazing. But it’s a really hard problem for various reasons. Still a lot of progress to be made.
Looks like these two places are on the cusp of a major breakthrough in RL/robotics!
one of the hard problems was labeling the data. knowing that the price is going up 10 bps one minute from now, should i buy? maybe. but what if it's going to crash 100 bps right after this? probably should sell instead.
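The labeling ambiguity described above can be made concrete with a tiny sketch (prices are made up): the same moment gets opposite labels depending on the look-ahead horizon.

```python
# The "right" supervised label depends on how far ahead you look.
# Prices are made up: now, one minute ahead, five minutes ahead.

prices = [100.00, 100.10, 99.00]

label_1min = "buy" if prices[1] > prices[0] else "sell"  # up 10 bps  -> buy
label_5min = "buy" if prices[2] > prices[0] else "sell"  # down 100 bps -> sell

print(label_1min, label_5min)  # buy sell -- same moment, opposite labels
```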
reinforcement learning promises to eliminate the need to assign labels in the training data. the agent will try a bunch of different variants at random and eventually will choose the most optimal one given the state of the world, i.e. the state of the markets. at training time i only need to feed it the features data. another benefit is that backtesting and model training are sort of fused into a single process. the rl model is optimizing pnl, and not the label classification score (as in the nn model). with a proper train-test-validation split, the most performant rl model can go straight into production (helping me to keep some of my hair brown)
while all the bits and pieces seem straightforward, i never managed to tune an rl model to work better in the backtest compared to the good old nn models. maybe i have never been closer to the gold vein, but for now, i have abandoned my efforts to build a performant rl agent in favor of nn models.
To answer your original question about overfitting: they can still overfit to test data by running a lot of experiments with different hyperparameters, architectures, and parts of the data, and only reporting what has worked. There are also more complex ways that test data can leak into training data (see the book Advances in Financial Machine Learning for a good overview). You can already see this is likely the case just from the variance in their results and trades. They also don't compare to baselines. It's not unlikely that the results are just random and they failed to report the experiments that didn't work. Of course, you cannot prove this without having an exact log of everything they ever did to the data. But again, that's not the main issue here.
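The multiple-testing effect is easy to demonstrate: generate many "strategies" that are pure coin flips and report only the best backtest. The best of 1000 noise strategies looks impressively profitable even though every one of them has zero expected return (a toy sketch, not anyone's actual methodology):

```python
# Selection bias in backtesting: 1000 strategies of pure noise,
# but the winner's backtest looks great if you only report the winner.

import random

random.seed(0)

n_strategies, n_days = 1000, 250
best_total = max(
    sum(random.choice([-1, 1]) for _ in range(n_days))  # one "backtest"
    for _ in range(n_strategies)
)
# Every strategy has expected return exactly 0, but the maximum over
# 1000 trials sits roughly 3-4 standard deviations (sqrt(250) ~ 15.8)
# above zero -- on the order of +50 "dollars" of pure luck.
print(best_total)
```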
This is certainly worth criticising
> they can still overfit to test data by running a lot of experiments with different hyperparameters, architectures and parts of the data, and only report what has worked.
but this is a different accusation from accidentally overfitting or leaking, i.e. it would mean that they're dishonest and cherrypicked their data in such a way that it hides overfitting and leakage. This criticism can be levelled at every ML paper, but in this case they detail their architecture, provide the code, and provide a Jupyter notebook to let people try it themselves.
> just assume they can trade at whatever price the data tells them. It's completely unrealistic.
I think that this is a fair assumption for highly liquid markets and relatively small trades, and if it's a fair assumption then all of your criticisms (slippage etc.) don't apply to the extent that they'll break the approach. Also, if the approach works, then trade size (fees aside) and being front-run also won't apply, because presumably large HFT firms could use it.
Overall I think your criticisms are valid, but imo they don't invalidate a promising approach, they're just the next thing to test.
Like sports gambling, a lot of the financial products we trade are obviously built by humans using rules, and arbitraging the intrinsic rules and regulations around said products. Think about Forex trading where you convert currency into currency. One of the key strategies is to find and identify brief negative cycles, for example, in the hope that converting US Dollars to Euros to Yen back to US Dollars leaves you with more dollars than you started out with.