Although I think the SEC has generally done a good job in terms of market microstructure (especially in terms of resisting ill-informed populist rhetoric), I really wish they'd start exploring sub-penny tick sizes, at the very least on the most liquid low-priced stocks.
It's been two decades since decimalization, the last tick size reduction in US equities. When certain stocks sit at a one-penny bid-ask spread all day, that's a sign that penny ticks are economically too wide. It's consumers and ordinary investors who pay the cost in the form of higher spreads.
You can think of the bid-ask spread as the cost of liquidity. It's how much more you pay for immediate execution. The tick size therefore acts like a price floor on the cost of liquidity. Why are we imposing a price floor, one that's much higher than the free market price, on end-consumers? If we reduced tick sizes from $0.01 to $0.001, the cost of trading for most retail investors on most stocks would fall by 50% or more.
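As a back-of-envelope illustration of that claim (the spread figures below are hypothetical, for illustration only): a marketable order pays roughly half the quoted spread per share, so any narrowing of the spread flows straight through to the taker.

```python
# Rough sketch: cost of a marketable order is about half the quoted spread
# per share. The spreads below are hypothetical, for illustration only.

def half_spread_cost(shares: int, spread: float) -> float:
    """Expected cost of crossing the spread once, in dollars."""
    return shares * spread / 2

# A low-priced stock pinned at the one-penny minimum quoted spread:
cost_now = half_spread_cost(1_000, 0.01)           # about $5.00 per 1,000 shares

# If $0.001 ticks let competition quote a $0.003 spread instead:
cost_smaller_tick = half_spread_cost(1_000, 0.003)  # about $1.50

reduction = 1 - cost_smaller_tick / cost_now        # roughly a 70% drop
```

Whether quoted spreads would actually compress that far is an empirical question; the point is only that when the one-penny floor binds, the taker's cost scales directly with the tick.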
A too-wide tick also blunts the price discovery process. It incentivizes HFTs to compete in an arms race of speed to capture queue position, rather than provide better prices. The only groups it really benefits are the incumbent exchanges (who can avoid competing with new entrants offering better pricing) and large, active portfolios, like hedge funds, that incur high market impact costs.
The only other way to get priority is by bettering your price. If the minimum tick size is set to a very low threshold (or no threshold at all), "outbidding" someone for a better place in line by offering an economically insignificant price improvement (say $0.000000001 per share) becomes the easiest way to gain price-time priority, and the value of placing a standing limit order is reduced: anyone can outbid you by a fraction of a fraction of a penny at any time, so it becomes much more likely that if your limit order gets filled, the price will then move against you, and otherwise your limit order will never get filled. This problem exists today but would probably be made worse by a smaller tick size - in fact I believe it's one reason why liquidity rebates are offered.
Hide-not-slide orders were used when you didn't want to move the price at all. If you could move the price a little bit, you'd get many of the benefits of a hide-not-slide without the negative externalities.
Curiously, the commodities markets have always adjusted tick sizes for different contracts so that the dollar value per tick is reasonably consistent (~$10). These are products whose contract sizes (in dollars) vary wildly and whose quotes are all over the place (anywhere from nine decimal places to none).
It makes writing your algos confusing, esp when you start doing inter-commodity spreads, but the business case is so obvious, it really makes you wonder why the equities markets haven't done more.
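The normalization the commodities exchanges do can be sketched like this (the contract specs below are approximate and from memory; treat them as illustrative rather than authoritative):

```python
# Dollar value of one tick = tick size (in quote units) x contract multiplier.
# Specs are approximate and for illustration; check the exchange for real values.
contracts = {
    # symbol: (tick_size, multiplier)
    "ES": (0.25, 50),       # E-mini S&P 500: 0.25 index pts x $50
    "CL": (0.01, 1_000),    # WTI crude: $0.01/bbl x 1,000 bbl
    "ZN": (1 / 64, 1_000),  # 10-yr T-note: 1/64 of a point x $1,000
}

for symbol, (tick, mult) in contracts.items():
    # All land in the same ~$10-$15 range despite wildly different quoting units.
    print(f"{symbol}: ${tick * mult:.3f} per tick")
```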
If you round it, you create a risk free arbitrage opportunity.
The price quoted on the market is the amount settled for one share. If it is not, then it is not really the price of one share.
At what point do we get a diminishing return? At what point is it so small that it blows up the algorithms?
Traders can compete on
- price
- queue position

Small ticks mean they compete on price; big ticks mean they compete on queue position - to trade, they need to have an order on the book, offering to trade at the current tick, that is ahead of other orders.
It's easy to see how the market benefits if people compete on price. However, it also benefits if people show how much they are willing to buy and sell. No sophisticated trader wants to reveal that as they will be taken advantage of when they are wrong. By having bigger tick sizes you incentivise people to try to get into the queue at these artificially better prices - it pulls liquidity into the open.
The tick size pilot that concluded in 2019 shows how this balance isn't easy to strike https://www.finra.org/rules-guidance/key-topics/tick-size-pi...
Lowering tick size helps price discovery, but hurts open liquidity.
Is there some ideal constant average number of ticks between the bid and ask which optimizes this tradeoff given a desired state of liquidity and price discovery?
Plus you can trade at sub-penny prices all day long. You just can't quote there. So I don't think that hinders price discovery one bit, even on SPY trading at $350.
Can you source this:
> If we reduced tick sizes from $0.01 to $0.001, the cost of trading for most retail investors on most stocks would fall by 50% or more.
If that's true, it would seem to imply that trading costs are so low that it doesn't matter.
Amusingly, there are folks who take the other side of this argument, and quite drastically, like Scott Kupor at Andreessen Horowitz:
> How did this happen? Because decimalization reduced the “tick size,” the minimum increment in which stock prices can trade, to a penny (from its previous level of 25 cents). Thus, a trader who previously might have purchased a block of small-cap shares knowing that a $0.25 tick size likely represented his minimum profit potential on a trade now found his minimum profit potential reduced to a penny. Facing this uneconomic situation, small-cap traders simply abandoned the market, killing liquidity for these stocks.
More recently, that was testimony to the SEC. It's certainly true that small-cap / illiquid stocks are illiquid. But it seems like the argument is "we should intentionally have market inefficiency for market makers to collect so that we have more liquidity". Maybe someone here knows more about the tick size pilot mentioned?
Perhaps in theory, but not in practice. It took a $910 million class-action settlement to help change the collusion.
Also in practice I remember the spreads being much wider, e.g. $56 bid, $57 offer.
A smaller tick size does not necessarily translate into lower transaction costs for investors. When the tick size is larger, market makers' competition for queue priority ensures that queues are oversized with liquidity, creating more adverse selection for market makers and lower transaction costs for investors. The mechanism of action is that queues remain deep even when the fair price breaches the BBO, because there remains a nontrivial probability that the fair price will bounce back to the other side of the BBO, returning the market maker's posted size to positive-expected-value territory. Market makers have an incentive to do this when the tick size is large in order to preserve time priority.
This mechanism doesn't exist in small-tick stocks: there is no cost to pull an order, because time priority is worth nothing in the limiting case of infinitesimally small tick sizes.
My experience has basically confirmed this theory. I have run market making algorithms in both small and large tick stocks and found both categories to be equally difficult. In fact it's hard to imagine it any other way: market makers will compete away any difference in edge, reducing the size of the pie to near zero in both tick size categories.
My findings here only apply to tick sizes below 100 bps.
I do however agree that it helps to create a speed arms race. But I see that as offset with more simplicity of larger tick sizes, order management is much easier.
> queues are over sized with liquidity when the tick size is larger, creating more adverse selection for market makers
That's not right when it comes to price-time priority (pro rata is a different story). Consider that statement in the limit: if the queue were of infinite length, there would be no trade-through, and no adverse selection. I'm talking about the price level penetration/second price dynamics that are relevant to a tick size conversation, not the more nuanced FV/winner's curse stuff. To your subsequent point about bid-ask bounce, that's precisely why being toward the front of the queue on a deep bid and ask is valuable.
> Market makers have an incentive to do this when the tick size is large in order to preserve time priority.
> I have run market making algorithms in both small and large tick stocks and found both categories to be equally difficult.
There's an efficient frontier parameterized (to first order) by the probability of a round trip (spread width or otherwise), the payout upon success, the loss upon failure, and the number of RT opportunities per trading day. As you noted there's not an easy way or a hard way given that these considerations are not orthogonal, and they're correlated with tick size.
I haven't seen this mentioned yet, so I'll mention it explicitly. As it stands the 611-612 series rules pertaining to sub-penny pricing allow for average price executions; a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue. There are far bigger fish to fry in the market structure debate.
True, but only lit decimal penny prices are protected by NBBO. Particularly when it comes to retail flow, that gives internalization pools a license to charge high prices by only having to match penny ticks.
If sub-penny was protected by NMS, then lit exchange quotes could narrow the NBBO spread, forcing the internalization pools to give retail traders much better pricing.
Also, in general, I suspect price discovery is much less efficient when it occurs outside of lit quotes, if only because there's no transparency on book pressure. But I'm definitely less confident about that assertion, and haven't seen it quantified in any way.
E[PnL | is_filled_very_soon] and P(is_filled_very_soon) both decline monotonically in queue size assuming we remain at the back of the queue. E[PnL | is_filled_very_soon] starts off negative and gets more negative monotonically. P(is_filled_very_soon) starts high and asymptotes to 0 (but never reaches it assuming queue size remains finite).
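Those monotonicity claims can be checked in a toy model (all parameters invented): sellers hit the bid either because fair value dropped one tick (informed flow) or for no reason (noise flow); we join the back of the bid queue at price 0 and get filled once the queue ahead of us is exhausted.

```python
import random

def simulate(queue_ahead: int, horizon: int = 50, trials: int = 20_000, seed: int = 1):
    """Toy queue model. Each step: an informed sell (fair drops one tick AND
    the bid is hit) with prob 0.1, or an uninformed sell (bid hit, fair
    unchanged) with prob 0.2. Returns (P(filled within horizon),
    E[mark-to-fair PnL | filled]). All probabilities are made up."""
    rng = random.Random(seed)
    fills, pnl_sum = 0, 0.0
    for _ in range(trials):
        ahead, fair = queue_ahead, 0.0
        for _ in range(horizon):
            u = rng.random()
            if u < 0.1:          # informed sell: price was stale, we're adversely selected
                fair -= 1.0
                ahead -= 1
            elif u < 0.3:        # uninformed sell: queue shrinks, value unchanged
                ahead -= 1
            if ahead < 0:        # our turn at the front: we buy at 0
                fills += 1
                pnl_sum += fair  # mark the fill against current fair value
                break
    return fills / trials, (pnl_sum / fills if fills else float("nan"))
```

Longer queues fill less often within the horizon, and when they do fill it is only after more informed selling, so the conditional PnL is more negative, matching both curves described above.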
I will elaborate on the mechanisms behind queues being over-sized in large tick names:
- I have estimation error in my theoretical fair value, combined with an opportunity cost of pulling my order, so even if I think my order is negative EV I want to leave it in the queue unless that negative EV exceeds the opportunity cost. In small tick names I don't care about this estimation error because there's little to no opportunity cost of pulling an order, so I am much quicker to remove liquidity, which incentivizes a sparse book.
- Even if I get filled in a large tick name there's often resting size behind me that lets me scratch out, especially if in the market I'm trading I get the ack before the print shows in the public market data (in this case I am only competing with firms that have canary orders and I'm not sure how common that is outside futures arb).
I personally see the biggest problem of large ticks being that they advantage prop firms that can pre-queue multiple levels in advance, especially with GTC orders. I see the biggest problem of small ticks being that prop firms can dime genuine liquidity, which is a form of front-running, again giving them a big advantage. Not sure exactly how these will net off in terms of rents collected.
> a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue. There are far bigger fish to fry in the market structure debate.
Agreed. Either way it's not very consequential. I'm 100% in favour of leaving it as-is in US equities though. I'm looking at the orderbooks now of cheap stocks with 50 bps tick sizes and the spreads are multi-ticks wide, so reducing that would achieve nothing except added complexity.
You’re right that market makers aren’t earning economic rents, at least at the margin. Since transaction costs are just the mirror image of market maker profits (plus exchange fees and commissions), this would suggest that aggregate transaction costs would remain unchanged regardless of microstructure.
However this ignores the fact that trading volume is highly elastic to the price of liquidity. Moreover this is doubly true of non-toxic order flow. I.e. traders with only a very small edge are more sensitive to costs than those with a large one.
In other words smaller ticks would lead to much more trading volume, the bulk of which would be high quality and non-toxic.
In aggregate, market maker profits would not decrease. But they would be spread over a much larger pool of non-toxic flow, so the cost paid on any single trade would decrease. This isn’t just hand-waving theory; it’s exactly what we witnessed when US equities moved from 1/16 ticks to decimalization.
In econ terminology, the problem with wide ticks isn’t the capture of economic rents, but the deadweight loss from price floors in an otherwise competitive market.
I don't agree that flow execution is going to be queueing much more than hitting in larger tick stocks relative to smaller tick stocks, especially in US equities, where the largest tick size is 100 bps and low-priced stocks have higher volatility. Most of these stocks have multi-tick spreads anyway, so reducing the tick size does nothing but increase the complexity of order management.
The bid-ask volume imbalance is often skewed in these stocks, and you get less slippage by just lifting the entire queue (when the BBO imbalance points in your direction) and going bid-over with half of the average queue size, putting you at the front of the queue even if you have a low-information order you're trying to execute.
I can actually make a theoretical argument that large ticks are better for flow execution: if the tick size is sufficiently large, then I can be bid-over with size and not get dimed. Queue position is mine and can't be effectively stolen by market maker algos.
I think it's mostly a myth that execution slippage is lower when queueing versus hitting for these large tick names. The average slippage from VWAP from randomly queueing in a stock with a 20 bps tick size will be about 10 bps, roughly the same as if you're just crossing the spread with a large order. But of course this slippage can be massively reduced in both the queueing and hitting execution strategies with some simple heuristics. You can get some market data and backtest this slippage yourself.
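A minimal sketch of the measurement suggested above (the trade data is invented): compute the interval VWAP and express an execution's deviation from it in basis points.

```python
# Slippage-vs-VWAP sketch; trades are made-up (price, size) pairs.

def vwap(trades):
    """Volume-weighted average price over a list of (price, size) trades."""
    notional = sum(price * size for price, size in trades)
    volume = sum(size for _, size in trades)
    return notional / volume

def slippage_bps(exec_price, trades, side="buy"):
    """Signed slippage of one execution vs. interval VWAP, in basis points.
    Positive means the execution did worse than VWAP."""
    ref = vwap(trades)
    raw = (exec_price - ref) / ref * 1e4
    return raw if side == "buy" else -raw

trades = [(10.00, 500), (10.02, 300), (10.01, 200)]  # interval tape
buy_slip = slippage_bps(10.02, trades, side="buy")   # paid above VWAP
```

Run over many randomized entry times, the distribution of this number is what would let you compare queueing against crossing, as suggested above.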
The key is that the total volume of execution flow is not fixed, but elastic to spread sizes. The total slippage captured by market makers and prop traders is higher in 2020 than it was in 1985; HFT firms make more in aggregate than old-school floor traders did.
Yet I think both you and I can agree that, by any reasonable measure, trading costs are orders of magnitude lower in 2020 than in 1985. You have to be careful not to confuse an aggregate amount (total industry profit) with a unit cost (average cost to trade $X). The former can rise even while the latter is falling. When demand is elastic (as it is for liquidity), suppliers make up for lower costs with higher volumes.
> being bid-over with half of the average queue size and being front of queue
This is a tangent, but I don’t think that works for ordinary traders in US equities. Even if you manage to simultaneously sweep the touch at every exchange, SIP is still going to lag and show a stale NBBO. Without an ISO (which civilians don’t have) the exchange is obligated to route the order to the stale liquidity. By the time NBBO updated, an HFT would have already grabbed the queue. I could be wrong though, so let me know if I’m missing something.
While the fragmentation of trading venues has led to competition on fees, in my experience it has actually widened the effective spread (spread + fees) and created a more difficult environment for customers.
Let's say a market-maker would like to trade 100 contracts. Any less than that is great, and any more than that is bad, as the counter-party is likely well informed about their trade. With one exchange, the market-maker can post their 100 contracts on that exchange, and be confident that the most risk they will take on is 100 contracts.
With multiple exchanges, the market-maker now needs to spread their exposure across exchanges (say, 10 contracts each on 15 exchanges), which can leave them (A) overexposed across exchanges, and (B) underexposed on any given exchange.
(A) causes a problem because the market-makers compensate for this risk by widening spreads. If the most sophisticated counterparties in the market can access liquidity across exchanges, the market-maker needs to at least statistically account for that possibility.
(B) causes a problem for customers with less sophisticated market access. In the example above, a customer with access to 1 of 15 exchanges only has the ability to trade 10 contracts, when they could trade 150 contracts with connectivity to all 15 exchanges.
For a real life example of where single listing works, you can currently trade over 20m of notional value with a spread of about .01 basis points in the ES future listed on the CME.
Source: HFT trader
Source: Used to be an HFT.
Broadly speaking, and as more fully discussed below, algorithmic trading in the equities market and, to a lesser extent, in the debt market, has improved many measures of market quality and liquidity provision during normal market conditions, though studies have also shown that some types of algorithmic trading may exacerbate periods of unusual market stress or volatility. Advances in technology and communications have enabled many market participants to more efficiently provide liquidity, more efficiently access market liquidity, implement new trading services, and more effectively manage risk across a range of
Today, algorithms address many of the problems and decisions that have long been central to the business of trading. What instrument(s) should be invested in or traded? What price should be bid or offered? What order size is optimal? What should be the response to a request for a quotation? What risk will be taken on by facilitating a trade? How does that risk change with the size of the trade? Is the risk of a trade appropriate to a firm’s available capital? What is the relationship between the price of different but related securities or financial products? To what market should an order be sent? Is it more effective to provide liquidity or demand liquidity? Should an order be displayed or non-displayed? To which broker should an order be sent? When should an order be submitted to a trading center?
That's interesting, and goes against the conventional wisdom that markets driven by automated, high-frequency trading will be much more volatile. I wonder if HFTs made major changes to avoid another Flash Crash.
Only if you ignore the word "normal". Otherwise it's what people have been saying for years - HFT reduces volatility when the seas are calm and amplifies it when they're choppy.
Volatile volatility reduction
For example, no one is listing buy orders for Apple at $1, even though everyone would love to buy Apple stock for one dollar. A human trader has enough sense to realize that most liquidity in any marketplace is off the order book, and act appropriately.
But even so, I don't think anyone really believes that HFT creates higher average market volatility. Some freak occurrences have happened, but I'm sure they'll go down over time.
It doesn't have to do with how ETFs rebalance, rather how shares are created and destroyed.
Many of these quant/algorithmic trading firms are exploiting market inefficiencies at the millisecond (microsecond?) level. No human participates on the same timescale, yet every "real person's" trade gets just a little bit shaved off by some algorithm able to take advantage of it.
What would be so bad about putting in human-sized time intervals as the minimum trading frequency? Like, buyers and sellers can only be cleared on a 1.0 second frequency, or even 0.1 second frequency. So that the millions of times-per-second arms race is nullified.
Are these algorithmic trades serving some public good? Why not have the SEC / exchanges put in this delay for everyone's benefit, and not have these firms make $ off everyone?
Or is that not actually the main market inefficiency happening / being questioned here, and the bigger loss is bulk movements driven by algorithms over hours/days, not milliseconds?
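For concreteness, the proposal above amounts to replacing continuous matching with frequent call auctions. A toy single-price batch clearing (orders invented) might look like:

```python
# Batch auction sketch: collect orders for an interval, then clear them all
# at one price that maximizes executed volume. Orders below are invented.

def clearing_price(bids, asks):
    """bids/asks: lists of (price, size). Returns (clearing_price, volume)."""
    prices = sorted({p for p, _ in bids + asks})
    best = (None, 0)
    for px in prices:
        demand = sum(s for p, s in bids if p >= px)  # buyers willing at px
        supply = sum(s for p, s in asks if p <= px)  # sellers willing at px
        vol = min(demand, supply)
        if vol > best[1]:
            best = (px, vol)
    return best

bids = [(10.02, 300), (10.01, 500)]
asks = [(10.00, 400), (10.01, 300)]
print(clearing_price(bids, asks))
```

The idea is that within a batch, arrival order no longer matters, so shaving microseconds buys nothing; whether that costs more in liquidity than it saves is exactly the open question.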
First off when you enter an order it's probably being routed by an algorithm trying to get you the best price.
Second, if you do slow things down the way you propose, I don't see how it still won't be a speed arms race to be the first to get involved in the trade.
Speed is good. If you want to disincentivize anything, it should probably be excessive quoting.
The characteristics that make a company a sound investment change on the order of days, not sub-seconds.
Getting an up to date price? When I go to market to trade, I want the best price available right now. Not one from 5 minutes ago.
I'm still not sure what problem removing my ability to cross the spread whenever I like is meant to be solving.
What problem does it solve? Why should we have so many expert programmers and mathematicians spending their time competing in a 0-sum game that could simply run a bit slower and produce the same societal value?
There are a couple of large firms that make 5-6 figures a day from arbitraging the closing cross. I'm sure they'd love to run their strategies all day; whether they would make more money than HFTs make now is anybody's guess.
It is an interesting thought experiment though: quantize time as well as price. You could exaggerate to one trade per day for the thought experiment. Now this would be on a human scale, so a lot of information would build up between trades (same as when markets are closed). Between that and no time quantization like now, there are probably a few distinct stable configurations (like electron orbitals) whose positives and negatives vary quite a bit. Maybe one of them is better than the current system. Maybe not.
What problem is this solving?
Seems like a lot of faff to get people worse prices.
For example, a market maker could see stocks A and B quoted at a certain price, and because they have some correlation with stock C, the market maker can quote stock C based on the prices of A and B, knowing that if they get filled on their quote in C, they will be fast enough to put a hedge on in A and B. Conversely, if quotes for A or B move, they know they can be fast enough to adjust their quote for C.
If markets did not trade continuously, there would be no (soft) guarantee to the market maker that if they got filled on C that the quotes for A and B would still be there, and thus you would see less liquidity in all stocks because of this.
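A stripped-down sketch of that quoting loop (the symbols, weights, and edge below are all invented for illustration): derive C's fair value from A and B, quote around it, and hedge in A and B when the C quote is filled.

```python
# Correlated-quoting sketch. Weights and edge are made up, not calibrated.

def fair_value_c(mid_a: float, mid_b: float,
                 beta_a: float = 0.6, beta_b: float = 0.4) -> float:
    """Model C's fair value as a linear combination of A's and B's mids."""
    return beta_a * mid_a + beta_b * mid_b

def quotes_for_c(mid_a: float, mid_b: float, edge: float = 0.05):
    """Our bid and ask in C, a fixed edge around the model fair value."""
    fv = fair_value_c(mid_a, mid_b)
    return fv - edge, fv + edge

bid_c, ask_c = quotes_for_c(mid_a=100.0, mid_b=50.0)

# If our ask in C is lifted, lay the risk off in the hedge legs:
hedge_ratios = {"A": 0.6, "B": 0.4}  # shares of A and B bought per share of C sold
```

The whole edifice rests on being fast enough that A's and B's quotes haven't moved between the fill in C and the hedge, which is the soft guarantee continuous trading provides.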
For instance, say you want to sell something. If you don't have liquidity in what you are selling, you just have a few offers (which may not be enough for you to sell everything at once) at various prices (so not really optimized for you). But with liquidity, here provided by HFT, there is enough quantity on the buying side for you to sell all at once, at a much more aggressively optimized price.
So yes, they will make some pennies on your order, but it's nothing compared to what your broker charges, and unless you use a bad broker, optimizing fees is the last thing to think about. On top of that, they provide both liquidity and price optimization for you and the market.
They won't do a trade if there's no one to instantly sell to, so they aren't adding liquidity, or am I missing something?
I can address some parts of this.
"HFT works by reacting faster than another market participant"
- There are a bunch of different HFT strategies. In this case, we're usually talking about market making, which means you are placing limit orders at a price where you don't believe it will execute immediately. You can react to many things - price movements, events, anything.
- In placing a resting order, where is there an assumption that another participant was willing to do the same trade? Sometimes (pretty often) all the orders on a price level are HFT participants, and if you look at the market feed it's pretty easy to tell.
- In that case (which I claim is pretty common), the liquidity was not already there.
- If we consider that average spread sizes have shrunk significantly with electronification and HFT, then that also disproves the assumption that "the liquidity was already there". Liquidity is not just the willingness to buy or sell; it's the willingness to buy and sell at a competitive price. Otherwise, I mean, I'm always willing to pick up TSLA at $.01, and I'm always willing to sell TSLA at $500k. Doesn't mean I'm providing liquidity.
"They won't do a trade if there's no one to instantly sell to"
- That's simply not true. If you've taken a look at the order book, the fact that an HFT order is resting on the book (and not executed) means that there was nobody to instantly sell to.
- Are you suggesting that HFT firms all know that someone is going to come through and buy at a price level, and hop in? Flash Boys suggests this (and it's possible to infer some "whale" actions if they route their orders poorly), and that type of inference is possible sometimes, due in part to the way the US Equities ecosystem is set up. But I'll also ask you - have you thought about the order of operations in which someone might "know that there is someone to instantly sell to?" Consider the CME (futures exchange), where there's a ton of HFT, and for which there are no other markets. How is your sentence supposed to work?
- Also, perhaps empirically, that was simply not true for my firm, which was (and still is) a pretty successful one.
Yes, I'm very much a layman on this topic.
> placing limit orders at a price where you don't believe it will execute immediately.
Isn't that a "SFT" strategy? Why do you need super low latencies for something like that?
> Are you suggesting that HFT firms all know that someone is going to come through and buy at a price level, and hop in?
That's how I've commonly seen HFT being described, that these companies would see an order on exchange A and faster than anyone else would bid on exchanges B and C accordingly.
There are complex reasons why improving price discovery and price harmony between markets is good for the wider economy but they're a bit more hand-wavey...
For you this is only $9.00 more than before, but doing this all the time is how HFT makes money, without really offering any benefit to other market participants.
It can benefit stock prices for companies too, since it can cause the market to move more in their direction if machines think they should buy based on whatever they're programmed to look at.
I looked into this for a while, but I simply felt it was immoral and dropped the project.
which can also harm companies if algorithms decide to sell, which might trigger a chain of selling by everyone else