Staff Report on Algorithmic Trading in U.S. Capital Markets [pdf] (sec.gov)
223 points by ra7 26 days ago | 87 comments



> They find that when the relative tick size, i.e. the minimum tick size increment divided by the stock price, is larger, HFTs compete more intensely to be the first one to the front of the limit order book queue in order to supply liquidity.

Although I think the SEC has generally done a good job in terms of market microstructure (especially in terms of resisting ill-informed populist rhetoric), I really wish they'd start exploring sub-penny tick sizes, at the very least on the most liquid low-priced stocks.

Decimalization was two decades ago, and there hasn't been a tick size reduction in US equities since. When certain stocks sit at a one-penny bid-ask spread all day, that's a sign that penny ticks are economically too wide. It's consumers and ordinary investors that pay the cost in the form of higher spreads.

You can think of the bid-ask spread as the cost of liquidity. It's how much more you pay for immediate execution. The tick size therefore acts like a price floor on the cost of liquidity. Why are we imposing a price floor, one that's much higher than the free market price, on end-consumers? If we reduced tick sizes from $0.01 to $0.001, the cost of trading for most retail investors on most stocks would fall by 50% or more.
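To make the price-floor arithmetic concrete, here's a toy calculation (my own illustrative numbers, not figures from the report):

    # Toy numbers: a liquid stock pinned at the one-penny minimum spread.
    shares = 1_000
    cost_penny_tick = shares * 0.010 / 2    # half-spread cost: $5.00

    # If makers could quote inside the penny, say a $0.002 spread:
    cost_milli_tick = shares * 0.002 / 2    # $1.00, an 80% reduction

    print(cost_penny_tick, cost_milli_tick)  # 5.0 1.0

The exact savings depend on where competition would actually push the spread; the point is only that the floor, not competition, is setting the price.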

An oversized tick blunts the price discovery process. It incentivizes HFTs to compete in an arms race of speed to capture queue position, rather than to provide better prices. The only groups it really benefits are the incumbent exchanges (who can avoid competing with new entrants offering better pricing) and large, active portfolios, like hedge funds, that incur high market impact costs.


Decreasing the tick size is not automatically an improvement. Take, for example, the "hide not slide" order type offered by some exchanges several years ago[1]. Due to the liquidity rebates offered by exchanges (essentially, a very small rebate for having your standing limit order filled by a marketable order), traders "fight" to get first in line to have their limit orders filled. Due to price-time priority, that means having placed your limit order at a given price earlier than someone else. (The hide-not-slide rule essentially circumvented this.)

The only other way to get priority is by bettering your price. If the minimum tick size is set to a very low threshold (or no threshold at all), "outbidding" someone for a better place in line by offering an economically insignificant price improvement (say $0.000000001 per share) becomes the easiest way to get price-time priority over someone, and hence the value of placing a standing limit order is reduced: anyone can outbid you for a fraction of a fraction of a penny at any time, so it becomes much more likely that if your limit order gets filled, the price will then move against you, and otherwise your limit order will never get filled. This problem exists today but would probably be made worse by a smaller tick size - in fact I believe it's one reason why liquidity rebates are offered.
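To make the queue-jumping dynamic concrete, here's a toy price-time priority book in Python (my own sketch, not any exchange's matching logic):

    import heapq
    import itertools

    arrival = itertools.count()  # tie-breaker: earlier arrival wins at equal price

    def add_bid(book, price, trader):
        # Max-heap keyed by (price, arrival): price-time priority
        heapq.heappush(book, (-price, next(arrival), trader))

    book = []
    add_bid(book, 10.50, "patient liquidity provider")  # first at the tick
    # With a tiny (or no) tick, anyone can leapfrog for a negligible bump:
    add_bid(book, 10.500000001, "queue jumper")

    print(book[0][2])  # "queue jumper" holds priority for ~$1e-9 per share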

1. https://www.bloomberg.com/opinion/articles/2015-01-13/hide-n...


I’m sorry, I’m not sure I understand the relationship you’re implying between “hide not slides” and small tick sizes.

Hide-not-slides were used when you didn’t want to move the price at all. If you could move the price a little bit, you’d get many of the benefits of a hide-not-slide without the negative externalities.


Here, here.

Curiously, the commodities markets have always adjusted tick sizes for different contracts so that the dollar value is reasonably consistent (~10 dollars). These are products for which the contract size (in dollars) varies wildly and the quotes are all over the place (anywhere from 9 decimal places to none).

It makes writing your algos confusing, especially when you start doing inter-commodity spreads, but the business case is so obvious that it really makes you wonder why the equities markets haven't done more.


Off-topic: the expression of agreement is "hear, hear" not "here, here" https://www.grammarly.com/blog/here-here-vs-hear-hear/


Some dark pools had sub-penny spreads for a while, but the SEC fined them because this is illegal. I'm not sure what material kind of harm they think took place, but whatever, I guess.


How do you settle the purchase of one share at a price of 20.0006? It’s a price, and you can’t really settle in amounts smaller than the smallest monetary unit.

If you round it, you create a risk free arbitrage opportunity.


How is this different from FX, where someone buys EURUSD in a USD quantity? Venues trade EURUSD, not USDEUR. Say they want to trade 1000 dollars and the current FX rate is 1.18162; that's 846.2957634434 euros. The minimum settlement in euros is 1 cent, so what is done with the 0.57 of a euro cent, or what happens to the difference between 1000 USD and 999.9931898? I think the answer to this is banker's rounding.
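For illustration, the arithmetic with Python's decimal module, whose ROUND_HALF_EVEN mode is banker's rounding (numbers from the example above; actual settlement conventions are venue-specific):

    from decimal import Decimal, ROUND_HALF_EVEN

    rate = Decimal("1.18162")   # EURUSD: dollars per euro
    usd = Decimal("1000")

    eur_exact = usd / rate      # 846.2957634433... before rounding

    # Settle EUR to the cent with banker's rounding
    eur_settled = eur_exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    print(eur_settled)            # 846.30
    print(eur_settled * rate)     # 1000.0050060: off by half a cent

Whichever side the fraction of a cent lands on, it's noise relative to the spread, which is presumably why nobody worries about it.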


It's called an "FX Rate" not an "FX price". You apply a rounding as part of the transaction. The amount settled is not a rate, it is a monetary amount.

The price quoted on the market is the amount settled for one share. If it is not, then it is not really the price of one share.


There are already contracts trading at fractional-cent prices. For an example, see Treasury bond futures listed on CME. According to a CME document[1], margin and option premium calculations round the amount to the nearest cent. I suppose the same happens at settlement.

[1] https://www.cmegroup.com/trading/interest-rates/files/Treasu...


But then you can arbitrage it. Buy 10,000 times 1 share, sell 10,000 shares. You can't really do that in the bond market, but you can in the equity market.
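A toy version of that arbitrage, assuming (hypothetically) that each single-share execution at a sub-penny price settles rounded to the nearest cent:

    PRICE = 20.0006   # quoted sub-penny price
    N = 10_000

    # Hypothetical settlement rule: each 1-share buy rounds to the cent
    cost = round(PRICE, 2) * N   # 10,000 separate 1-share buys: 200,000.00
    proceeds = PRICE * N         # one 10,000-share sale:        200,006.00

    print(proceeds - cost)       # ~6.00 of risk-free profit

Rounding the whole fill's notional once, as in the FX example elsewhere in the thread, avoids this blow-up.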


I think practically this would have to be solved by the equity brokers: all the intermediary accounting done in fractions, with rounding applied only once cash leaves the broker.


I agree with you, but there is an interesting thought experiment: what if we made tick sizes $.000000000000001?

At what point do we get a diminishing return? At what point is it so small that it blows up the algorithms?


There is some interesting research regarding the optimal tick size (by instrument), eg https://arxiv.org/abs/1207.6325


Well, it might blow up some software that isn't expecting such tiny tick sizes, but there is also an economic motive for larger tick sizes.

Traders can compete on

- price

- queue position

Small ticks mean they compete on price and big ticks mean they compete on queue position - to trade they need to have an order on the book, offering to trade at the current tick, that is ahead of other orders.

It's easy to see how the market benefits if people compete on price. However, it also benefits if people show how much they are willing to buy and sell. No sophisticated trader wants to reveal that as they will be taken advantage of when they are wrong. By having bigger tick sizes you incentivise people to try to get into the queue at these artificially better prices - it pulls liquidity into the open.

The tick size pilot that concluded in 2019 shows how this balance isn't easy to strike https://www.finra.org/rules-guidance/key-topics/tick-size-pi...


Thank you for this simple explanation.

Lowering tick size helps price discovery, but hurts open liquidity.

Is there some ideal constant average number of ticks between the bid and ask which optimizes this tradeoff given a desired state of liquidity and price discovery?


They wouldn't blow up, but if the tick size is too small there's no point in displaying orders on the book and you end up with less liquidity.


Actually, they went the other way, with nickel-wide ticks.

Plus, you can trade at sub-penny prices all day long; you just can't quote them. So I don't think the tick hinders price discovery one bit, even on SPY trading at $350.


Can you give an example of a "most liquid low-priced stocks" that "sit at a one penny bid-ask spread all day"? Correct me if I'm wrong, but you mean that it's the same penny bid-ask spread right? Like, 10.49-10.50 for 6.5 hours as opposed to a tight spread that moves.

Can you source this:

> If we reduced tick sizes from $0.01 to $0.001, the cost of trading for most retail investors on most stocks would fall by 50% or more.

If that's true, it would seem to imply that trading costs are so low that it doesn't matter.


No, it does not sit at the same 1-tick spread all day. Typically, for these tickers, the 1-tick spread transitions very quickly to the 1-tick spread above or below (each individual market is 2 ticks wide for a very small amount of time at each tick transition, much less than 1 second).


They could just reverse split if it’s such a big deal


The other option, which Matt Levine has been writing about more lately, is no longer requiring an integer number of shares (jedberg’s precision question is similar).

Amusingly, there are folks who take the other side of this argument, and quite drastically, like Scott Kupor at Andreessen Horowitz [1]:

> How did this happen? Because decimalization reduced the “tick size,” the minimum increment in which stock prices can trade, to a penny (from its previous level of 25 cents). Thus, a trader who previously might have purchased a block of small-cap shares knowing that a $0.25 tick size likely represented his minimum profit potential on a trade now found his minimum profit potential reduced to a penny. Facing this uneconomic situation, small-cap traders simply abandoned the market, killing liquidity for these stocks.

More recently, that was testimony to the SEC [2]. It’s certainly true that small-cap / illiquid stocks are illiquid. But it seems like the argument is “we should intentionally have market inefficiency for market makers to collect so that we have more liquidity”. Maybe someone here knows more about the tick size pilot mentioned?

[1] https://blog.pmarca.com/2013/03/26/unshackle-the-middle-clas...

[2] https://a16z.com/2019/06/20/capital-formation-smaller-compan...


Before decimalization, the minimum increment was 1/16th, not 25 cents or 1/4th.


> Before decimalization, the minimum increment was 1/16th

Perhaps in theory, but not in practice. It took a $910 million class-action settlement to help end the collusion.

https://www.economist.com/finance-and-economics/1998/01/15/c...

Also in practice I remember the spreads being much wider, e.g. $56 bid, $57 offer.


As an industry practitioner I disagree with this, but I can see why first-order thinking leads to that conclusion.

A smaller tick size does not necessarily translate into lower transaction costs for investors. Market makers' competition for queue priority ensures that queues are oversized with liquidity when the tick size is larger, creating more adverse selection for market makers and lower transaction costs for investors. The mechanism of action is that queues remain deep even when the fair price breaches the BBO, because there remains a nontrivial probability that the fair price will bounce back to the other side of the BBO, returning the market maker's posted size to positive-expected-value territory. Market makers have an incentive to do this when the tick size is large in order to preserve time priority.

This mechanism doesn't exist in small-tick stocks: there is no cost to pull an order, because time priority is worth nothing in the limiting case of infinitesimally small tick sizes.

My experience has basically confirmed this theory. I have run market making algorithms in both small and large tick stocks and found both categories to be equally difficult. In fact it's hard to imagine it any other way: market makers will compete away any difference in edge, reducing the size of the pie to near zero in both tick-size categories.

My findings here only apply to tick sizes below 100 bps.

I do, however, agree that it helps create a speed arms race. But I see that as offset by the greater simplicity of larger tick sizes: order management is much easier.


Also an industry practitioner, with an auction theory/mechanism design background. I agree with portions of what you said, but:

> queues are over sized with liquidity when the tick size is larger, creating more adverse selection for market makers

That's not right when it comes to price-time priority (pro rata is a different story). Consider that statement in the limit: if the queue were of infinite length, there would be no trade-through, and no adverse selection. I'm talking about the price-level penetration/second-price dynamics that are relevant to a tick size conversation, not the more nuanced FV/winner's curse stuff. To your subsequent point about bid-ask bounce, that's precisely why being towards the front of the queue on a deep bid and ask is valuable.

> Market makers have an incentive to do this when the tick size is large in order to preserve time priority.

> I have run market making algorithms in both small and large tick stocks and found both categories to be equally difficult.

There's an efficient frontier parameterized (to first order) by the probability of a round trip (spread width or otherwise), the payout upon success, the loss upon failure, and the number of round-trip opportunities per trading day. As you noted, there's not an easy way or a hard way, given that these considerations are not orthogonal and are correlated with tick size.

I haven't seen this mentioned yet, so I'll mention it explicitly. As it stands the 611-612 series rules pertaining to sub-penny pricing allow for average price executions; a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue. There are far bigger fish to fry in the market structure debate.


> As it stands the 611-612 series rules pertaining to sub-penny pricing allow for average price executions; a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue.

True, but only lit decimal penny prices are protected by NBBO. Particularly when it comes to retail flow, that gives internalization pools a license to charge high prices by only having to match penny ticks.

If sub-penny was protected by NMS, then lit exchange quotes could narrow the NBBO spread, forcing the internalization pools to give retail traders much better pricing.

Also, in general, I suspect price discovery is much less efficient when it occurs outside of lit quotes, if only because there's no transparency on book pressure. But I'm definitely less confident about that assertion, and haven't seen it quantified in any way.


> Consider that statement in limit; if the queue was of infinite length, there would be no trade-through, and no adverse selection.

E[PnL | is_filled_very_soon] and P(is_filled_very_soon) both decline monotonically in queue size assuming we remain at the back of the queue. E[PnL | is_filled_very_soon] starts off negative and gets more negative monotonically. P(is_filled_very_soon) starts high and asymptotes to 0 (but never reaches it assuming queue size remains finite).
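A toy numeric sketch of those two monotonic effects (made-up parameters, not a calibrated model):

    # Resting at the back of a queue of size q: an incoming sweep of
    # roughly geometric size must eat the whole queue to reach you, and
    # bigger sweeps tend to be more informed (more adverse selection).
    r = 0.8            # P(sweep eats one more contract): decay rate
    half_spread = 0.5  # ticks captured on a benign fill
    info = 0.6         # adverse-selection ticks per contract swept

    def p_filled_soon(q):
        return r ** q                        # declines monotonically in q

    def ev_given_filled(q):
        return half_spread - info * (q + 1)  # negative, worsening in q

    for q in (0, 5, 10, 20):
        print(q, round(p_filled_soon(q), 3), round(ev_given_filled(q), 1))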

I will elaborate on the mechanisms behind queues being over-sized in large tick names:

- I have estimation error in my theoretical fair value, combined with an opportunity cost of pulling my order, so even if I think my order is negative EV I want to leave it in the queue unless that negative EV exceeds the opportunity cost. In small tick names I don't care about this estimation error because there's little to no opportunity cost of pulling an order, so I am much quicker to remove liquidity, which incentivizes a sparse book.

- Even if I get filled in a large tick name there's often resting size behind me that lets me scratch out, especially if in the market I'm trading I get the ack before the print shows in the public market data (in this case I am only competing with firms that have canary orders and I'm not sure how common that is outside futures arb).

I personally see the biggest problem of large ticks as being that they advantage prop firms that can pre-queue multiple levels in advance, especially with GTC orders. I see the biggest problem of small ticks as being that prop firms can dime genuine liquidity, which is a form of front-running, again giving them a big advantage. Not sure exactly how this will net off in terms of rents collected.

> a large percentage of U.S. equities volume already trades at mid or within the spread, both principally and on venue. There are far bigger fish to fry in the market structure debate.

Agreed. Either way it's not very consequential. I'm 100% in favour of leaving it as-is in US equities, though. I'm looking now at the order books of cheap stocks with 50 bps tick sizes, and the spreads are multiple ticks wide, so reducing the tick size would achieve nothing except added complexity.


You’re not considering the second order effect of smaller spreads inducing more volume in non-toxic order flow.

You’re right that market makers aren’t earning economic rents, at least at the margin. Since transaction costs are just the mirror of market maker profits (plus exchange fees and commissions), this would suggest that aggregate transaction costs would remain unchanged regardless of microstructure.

However, this ignores the fact that trading volume is highly elastic to the price of liquidity. Moreover, this is doubly true of non-toxic order flow: traders with only a very small edge are more sensitive to costs than those with a large one.

In other words smaller ticks would lead to much more trading volume, the bulk of which would be high quality and non-toxic.

In aggregate, market maker profits would not decrease, but they would be aggregated over a much larger pool of non-toxic flow. Thus the individual cost paid on any single trade would decrease. This isn’t just hand-waving theory; it’s exactly what we witnessed when equities converted from 1/16 ticks to decimalization.

In econ terminology, the problem with wide ticks isn’t the capture of economic rents, but the deadweight loss from price floors in an otherwise competitive market.


What deadweight loss? Trading is zero-sum in dollars (gross of fees) and positive-sum in utility. If execution flow is paying more slippage in large-tick stocks, then that PnL must be going into someone's pocket (a market maker, short-term scalper, or prop firm).

I don't agree that flow execution is going to be queueing much more than hitting in larger-tick stocks relative to smaller-tick stocks, especially in US equities, where the largest tick size is 100 bps and low-priced stocks have higher volatility. Most of these stocks have multi-tick spreads anyway, so reducing the tick size does nothing but increase the complexity of order management.

The bid-ask volume imbalance is often skewed in these stocks, and you get less slippage by just lifting the entire queue (when the BBO imbalance points in your direction) and being bid-over with half of the average queue size, at the front of the queue, even if you have a low-information order you're trying to execute.

I can actually make a theoretical argument that large-tick stocks are better for flow execution, as follows: if the tick size is sufficiently large, then I can be bid-over with size and not get dimed. Queue position is mine and can't effectively be stolen by market maker algos.

I think it's mostly a myth that execution slippage is lower when queueing versus hitting in these large-tick names. The average slippage from VWAP when randomly queueing in a stock with a 20 bps tick size will be about 10 bps, roughly the same as if you just cross the spread with a large order. But of course this slippage can be massively reduced, in both the queueing and the hitting execution strategies, with some simple heuristics. You can get some market data and backtest this slippage yourself.


> Trading is zero-sum in dollars (gross of fees) and positive-sum in utility. If execution flow is paying more slippage in large-tick stocks, then that PnL must be going into someone's pocket (a market maker, short-term scalper, or prop firm)

The key is that the total volume of execution flow is not fixed, but elastic to spread sizes. The total slippage captured by market makers and prop traders is higher in 2020 than it was in 1985. HFT firms make more in aggregate than old-school floor traders did.

Yet, I think both you and I can agree that by any reasonable measure, trading costs are orders of magnitude lower in 2020 than in 1985. You have to be careful not to confuse an aggregate amount (total industry profit) with a unit cost (average cost to trade $X). The former can rise even while the latter is falling. When demand is elastic (as it is for liquidity), suppliers make up for lower unit costs with higher volumes.
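A stylized version of that aggregate-versus-unit distinction, with made-up numbers:

    # Illustrative only: unit cost falls 10x while volume rises 50x.
    cost_1985, volume_1985 = 0.0625, 1e8    # $/share, shares/day
    cost_2020, volume_2020 = 0.00625, 5e9

    print(cost_1985 * volume_1985)   # 6.25e6:  old aggregate take
    print(cost_2020 * volume_2020)   # 3.125e7: higher aggregate take...
    # ...even though each individual trade pays 10x less per share.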

> being bid-over with half of the average queue size and being front of queue

This is a tangent, but I don’t think that works for ordinary traders in US equities. Even if you manage to simultaneously sweep the touch at every exchange, the SIP is still going to lag and show a stale NBBO. Without an ISO (which civilians don’t have), the exchange is obligated to route the order to the stale liquidity. By the time the NBBO updates, an HFT would have already grabbed the queue. I could be wrong though, so let me know if I’m missing something.


> Today’s equity market structure is highly fragmented, consisting of fifteen national securities exchanges, over thirty alternative trading systems, multiple single-dealer platforms within broker-dealers, and other forms of order matching

While the fragmentation of trading venues has led to competition on fees, in my experience it has actually widened the effective spread (spread + fees) and created a more difficult environment for customers.

Let's say a market-maker would like to trade 100 contracts. Any less than that is great, and any more than that is bad, as the counter-party is likely well informed about their trade. With one exchange, the market-maker can post their 100 contracts on that exchange, and be confident that the most risk they will take on is 100 contracts.

With multiple exchanges, the market-maker now needs to spread their exposure across exchanges (say, 10 contracts each on 15 exchanges), which can leave them (A) overexposed across exchanges, and (B) underexposed on any given exchange.

(A) causes a problem because the market-makers compensate for this risk by widening spreads. If the most sophisticated counterparties in the market can access liquidity across exchanges, the market-maker needs to at least statistically account for that possibility.

(B) causes a problem for customers with less sophisticated market access. In the example above, a customer with access to 1 of 15 exchanges only has the ability to trade 10 contracts, when they could trade 150 contracts with connectivity to all 15 exchanges.

For a real life example of where single listing works, you can currently trade over 20m of notional value with a spread of about .01 basis points in the ES future listed on the CME.

Source: HFT trader


The CME’s monopoly definitely causes problems though. For instance it’s much more expensive in fees to trade there and your recourse when you observe shenanigans is less.

Source: Used to be an HFT.


The "effective" count of trading venues is really 3: nyse, nasdaq, cboe. There's not much fragmentation really. Having multiple venues let's exchanges experiment with different price structures and technologies. Like inverted maker taker fees. These things bring more benefits to the consumer.


Yeah it's clear that centralized markets would hurt HFT and benefit most institutional and retail investors. Citadel and the like have strong incentives to resist any consolidation in the market place. After all, the market makers and latency arb guys are the ones bearing that multiple exchange risk (and thus are compensated for it), not the buyside investor.


The money quotes:

Broadly speaking, and as more fully discussed below, algorithmic trading in the equity markets and, to a lesser extent, in the debt markets, has improved many measures of market quality and liquidity provision during normal market conditions, though studies have also shown that some types of algorithmic trading may exacerbate periods of unusual market stress or volatility. Advances in technology and communications have enabled many market participants to more efficiently provide liquidity, more efficiently access market liquidity, implement new trading services, and more effectively manage risk across a range of markets.

Today, algorithms address many of the problems and decisions that have long been central to the business of trading. What instrument(s) should be invested in or traded? What price should be bid or offered? What order size is optimal? What should be the response to a request for a quotation? What risk will be taken on by facilitating a trade? How does that risk change with the size of the trade? Is the risk of a trade appropriate to a firm’s available capital? What is the relationship between the price of different but related securities or financial products? To what market should an order be sent? Is it more effective to provide liquidity or demand liquidity? Should an order be displayed or non-displayed? To which broker should an order be sent? When should an order be submitted to a trading center?


"Although some studies argue otherwise, a number of academic papers study the effects of algorithmic trading and high-frequency trading on volatility in equity markets and find evidence that, under normal market conditions, they reduce short term volatility."

That's interesting, and goes against the conventional wisdom that markets driven by automated, high-frequency trading will be much more volatile. I wonder if HFTs made major changes to avoid another Flash Crash.


>That's interesting, and goes against the conventional wisdom that markets driven by automated, high-frequency trading will be much more volatile.

Only if you ignore the word "normal". Otherwise it's what people have been saying for years - HFT reduces volatility when the seas are calm and amplifies it when they're choppy.


I'm curious about the number of studies and academic papers ("some" vs. "a number")


There's also the caveat: "However, there is some evidence, mostly from the Flash Crash, that in certain instances algorithmic trading and HFTs may exacerbate price movements during periods of high volatility or market stress."

Volatile volatility reduction


The heart of the problem isn't with HFT, it's that an order book isn't reflective of real supply and demand. A human realizes that, while HFT algos sometimes don't.

For example, no one is listing buy orders for Apple at $1, even though everyone would love to buy Apple stock for one dollar. A human trader has enough sense to realize that most liquidity in any marketplace is off the order book, and act appropriately.

But even so, I don't think anyone really believes that HFT creates higher average market volatility. Some freak occurrences have happened, but I'm sure they'll go down over time.


Lower volatility would help the HFTs with price forecasting and let them operate with more certainty, without wildly swinging their portfolios.


Never forget that ETFs are only really possible due to HFT. Most ETF users do not even realise this, unless they ask themselves "how do ETFs actually work", and they start reading about Authorized Participants.


Can you expand on this? From my understanding, ETFs rebalance quite slowly, once every few months; how does HFT even help, much less make ETFs possible?


ETF shares are created and destroyed all the time, bringing their value in line with the underlying. So for an S&P 500 ETF, HFT firms are creating shares when the price of the ETF is too high relative to the basket and destroying them when it is too low.

It doesn't have to do with how ETFs rebalance, rather how shares are created and destroyed.
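A minimal sketch of the creation/redemption decision (hypothetical numbers and thresholds, not any firm's actual logic):

    etf_price = 350.42   # where the ETF itself trades
    nav = 350.30         # per-share value of the underlying basket
    cost = 0.05          # trading + creation/redemption fees per share

    premium = etf_price - nav
    if premium > cost:
        # ETF rich: buy the basket, deliver it to create new ETF shares,
        # and sell those shares at the higher ETF price.
        action = "create"
    elif -premium > cost:
        # ETF cheap: buy ETF shares, redeem them for the basket, sell it.
        action = "redeem"
    else:
        action = "hold"  # premium is within costs: no arbitrage
    print(action)        # "create" for these numbers

Being fast matters because the premium is usually tiny and disappears as soon as anyone trades on it.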


While I read the report, here is a question I would love to know more about:

Many of these quant/algorithmic trading firms are exploiting market inefficiencies at the millisecond (microsecond??) level, to the point that no human is participating on the same timescale, yet every "real person's" trade gets just a little bit shaved off by some algorithm able to take advantage of it.

What would be so bad about putting in human-sized time intervals as the minimum trading frequency? Like, buyers and sellers can only be cleared on a 1.0 second frequency, or even 0.1 second frequency. So that the millions of times-per-second arms race is nullified.

Are these algorithmic trades serving some public good? Why not have the SEC / exchanges put in this delay for everyone's benefit, and not have these firms make $ off everyone?

Or is that not actually the main market inefficiency happening / being questioned here, and the bigger loss is bulk movements driven by algorithms over hours/days, not milliseconds?


The root premise of your argument is faulty: because humans can't trade as fast as computers, humans must be getting stiffed on price. Why would that be?

First off when you enter an order it's probably being routed by an algorithm trying to get you the best price.

Second, if you do slow things down the way you propose, I don't see how it still won't be a speed arms race to be the first to get involved in the trade.

Speed is good. If you want to disincentivize anything, it should probably be excessive quoting.


Speed is good for what?

Characteristics of a company being a sound investment change in the order of days, not sub-second.


> Speed is good for what?

Getting an up-to-date price? When I go to the market to trade, I want the best price available right now, not one from 5 minutes ago.


But we're not discussing on the scale of minutes, we're discussing sub-second intervals that aren't even perceptible to humans.


This whole argument seems to boil down to "I don't understand why anyone would want to trade more often than X, therefore nobody should". What's the difference between 5 minutes and any other arbitrary time period, other than hand-wavey notions about "perception"?

I'm still not sure what problem removing my ability to cross the spread whenever I like is meant to be solving.


> I'm still not sure what problem removing my ability to cross the spread whenever I like is meant to be solving.

What problem does it solve? Why should we have so many expert programmers and mathematicians spending their time competing in a 0-sum game that could simply run a bit slower and produce the same societal value?


Flip it around, what economic problem does enabling you to trade more often than once per day (say) solve?


Trading more frequently reduces uncertainty (it's harder to predict 24 hours out vs. 1 minute out), and uncertainty equates to risk, which means trading only once per day could increase the cost of capital.


You're basically asking "What economic benefit does a free market bring?" at this point.


It's easy to list the benefits of free markets, I don't think it's easy to do the same for sub-second regulated markets vs 1 second regulated markets.


You need to walk through your assumptions about how the algos are “shaving” off humans.


So what is algorithmic trading doing that's undesirable?


IEX does something like this WRT a minimum delay. Tom Scott has a video on it:

https://www.youtube.com/watch?v=d8BcCLLX4N4


It’s not human scale though. It’s “big market participants computers scale”.


If a market did as you suggested, then there would simply be no orders for 0.99999 seconds and then a huge flood of orders at 0.000001 seconds before execution happens.


It gets a little complicated: do you want to continuously display market data and only clear trades once a second, effectively having a visible two-sided auction once a second? Or should it not be visible? (In which case you're no longer giving market data to all market participants, something they'd rather see)?

There are a couple of large firms that make 5-6 figures a day from arbitraging the closing cross. I'm sure they'd love to run their strategies all day; whether they would make more money than HFTs make now is anybody's guess.


Not if they couldn’t be processed in that time.

It is an interesting thought experiment, though: quantize time as well as price. You could exaggerate it to one trade per day. That would be on a human scale, so a lot of information would build up between trades (the same happens while markets are closed). Between that extreme and no time quantization, as now, there are probably a few different stable positions (like electron orbitals) where the positives and negatives vary quite a bit. Maybe one of them is better than the current system. Maybe not.


But.... Why?

What problem is this solving?

Seems like a lot of faff to get people worse prices.


I'm talking about all bids and asks being aggregated in the given window, and matched/executed at the end of it at once.
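For concreteness, a toy version of that kind of batch (call) auction: collect orders for the whole window, then cross them at once. This is my own sketch, with a naive rule of pricing each match at the midpoint of the crossing pair, not any real exchange's algorithm:

    def clear_batch(bids, asks):
        """One window's auction. bids/asks: lists of (price, qty)."""
        bids = sorted(bids, reverse=True)  # best (highest) bid first
        asks = sorted(asks)                # best (lowest) ask first
        trades = []
        while bids and asks and bids[0][0] >= asks[0][0]:
            (bp, bq), (ap, aq) = bids[0], asks[0]
            qty = min(bq, aq)
            trades.append(((bp + ap) / 2, qty))  # split the overlap
            bids[0] = (bp, bq - qty); asks[0] = (ap, aq - qty)
            if bids[0][1] == 0: bids.pop(0)
            if asks[0][1] == 0: asks.pop(0)
        return trades

    # One second's accumulated orders (made-up numbers):
    print(clear_batch([(10.02, 300), (10.01, 200)],
                      [(10.00, 250), (10.03, 400)]))
    # -> [(10.01, 250)]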


That would result in what was described above. You are taking unnecessary risk by committing to place an order at some point in the future. You might not be so sure a second from now.


Market participants run the gamut from seasoned, rational professionals to knee-jerk, reactive amateurs. Professionals are the ones who are willing to take the other side during short-term movements for long-term profits. This softens the blow and improves markets for everybody. One thing that is left out of these discussions is that many of the market inefficiencies we're talking about are relatively uninformed orders trying to buy/sell at any price, which the professionals are racing against each other to trade with. In many cases, they're literally paying the brokerages for these orders. If modeling helped you be right 60% of the time, you would want to flip the coin as often as possible.


A market maker tries to stay market neutral, but obviously that's impossible to do continuously, so they try to limit their exposure by being hedged at all times. The simplest hedge is obviously to hedge a stock against itself and thus not hold any inventory, but that is often impossible. A different way of hedging is to hedge against correlated stocks, based on a model you have.

For example, a market maker could see stock A and B be quoted at a certain price, and because they have some correlation with stock C, the market maker can quote stock C based on the price of A and B; knowing if they get filled on their quote with C, that they would be fast enough to get a hedge in A and B. Conversely, if quotes for A or B move, they know they can be fast enough to adjust their quote for C.

If markets did not trade continuously, there would be no (soft) guarantee to the market maker that if they got filled on C that the quotes for A and B would still be there, and thus you would see less liquidity in all stocks because of this.
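A minimal sketch of that correlated quoting (hypothetical symbols and made-up model weights, far simpler than any real model):

    # Toy fair-value model: C is assumed to co-move with A and B.
    w_a, w_b, base = 0.6, 0.4, 1.25   # hypothetical regression weights

    def quote_c(mid_a, mid_b, half_spread=0.02):
        fair = base + w_a * mid_a + w_b * mid_b
        return fair - half_spread, fair + half_spread  # (bid, ask) for C

    bid_c, ask_c = quote_c(mid_a=50.10, mid_b=75.40)
    print(round(bid_c, 2), round(ask_c, 2))  # 61.45 61.49
    # If filled on C, hedge immediately in A and B; if A or B moves,
    # re-quote C before the stale quote can be picked off.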


Side question, can anybody recommend a good brokerage for running my own trading algorithm? I'm getting tired of inputting very similar trades every day and being limited by the types of limit orders they allow in after hours trading. So far it looks like the winner might be Interactive Brokers or maybe Centerpoint (if I meet their guidelines).


Interactive Brokers is really the best you're going to get. It's a low-level API, which is generally good. The documentation is terrible, in fact the worst I've ever seen on such a large project, but once you figure it out it'll give you all the functionality you need.


I like Alpaca. No fees and comes with a Python library called pylivetrader that is easy to use.
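For flavor, pylivetrader mirrors the zipline/Quantopian algorithm API if memory serves; something like the sketch below (treat the exact imports and entry points as assumptions and check the project's docs):

    # Sketch only: assumes pylivetrader's zipline-style entry points.
    from pylivetrader.api import order_target, symbol

    def initialize(context):
        context.asset = symbol('AAPL')   # runs once at startup

    def handle_data(context, data):
        # Naive example: keep holding exactly 10 shares of the asset
        order_target(context.asset, 10)

You then point the pylivetrader CLI at this file with your Alpaca API keys configured.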


I use Interactive Brokers for the same purpose. They're pretty much your only option :)


Related to a new kind of algorithm being used in the market 'Dexamethasone Announcement Could Have Made Hedge Funds A Fortune — Alpha Week' https://www.alpha-week.com/dexamethasone-announcement-could-...


While we’re on this topic, can some expert ELI5 how or/why HFT is good for markets/economy?


It provides liquidity to the market.

For instance, you want to sell something. Without liquidity in what you are selling, you have just a few offers (which may not be enough for you to sell everything at once), and at various prices (so not really optimized for you). But with liquidity, here provided by HFT, there is enough quantity on the buying side to sell everything at once, at a price that will be far more aggressively optimized.

So yes, they will make some pennies on your order, but it's nothing compared to the broker you use, and unless you use a bad broker, optimizing these fees is the last thing to think about. On top of that, they provide both liquidity and price optimization for you and the market.


But HFT works by reacting faster than another market participant willing to do the same trade, so the liquidity was already there.

They won't do a trade if there's no one to instantly sell to, so they aren't adding liquidity, or am I missing something?


I can see how you come to this conclusion from this sentence, but it's a vague sentence with slightly wrong premises. If you're a programmer, I think you may appreciate this situation when others talk about your work.

I can address some parts of this.

"HFT works by reacting faster than another market participant"

- There are a bunch of different HFT strategies. In this case, we're usually talking about market making, which means you are placing limit orders at a price where you don't believe they will execute immediately. You can react to many things - price movements, events, anything.

- In placing a resting order, where is there an assumption that another participant was willing to do the same trade? Sometimes (pretty often) all the orders on a price level are HFT participants, and if you look at the market feed it's pretty easy to tell.

- In that case (which I claim is pretty common), the liquidity was not already there.

- If we consider that average spread sizes have reduced significantly with electronification and HFT, then you also disprove the assumption that "the liquidity was already there". Liquidity is not just the willingness to buy or sell, it's also the willingness to buy and sell at a competitive price. Otherwise, I mean, I'm always willing to pick up TSLA at $.01, and I'm always willing to sell TSLA at $500k. That doesn't mean I'm providing liquidity.

"They won't do a trade if there's no one to instantly sell to"

- That's simply not true. If you've taken a look at the order book, the fact that an HFT order is resting on the book (and not executed) means that there was nobody to instantly sell to.

- Are you suggesting that HFT firms all know that someone is going to come through and buy at a price level, and hop in? Flash Boys suggests this (and it's possible to infer some "whale" actions if they route their orders poorly), and that type of inference is possible sometimes, due in part to the way the US equities ecosystem is set up. But I'll also ask you - have you thought about the order of operations in which someone might "know that there is someone to instantly sell to"? Consider the CME (a futures exchange), where there's a ton of HFT and for which there are no other markets. How is your sentence supposed to work?

- Also, perhaps empirically, that was simply not true for my firm, which was (and still is) a pretty successful one.


> I think you may appreciate this situation when others talk about your work.

Yes, I'm very much a layman on this topic.

> placing limit orders at a price where you don't believe it will execute immediately.

Isn't that a "SFT" strategy? Why do you need super low latencies for something like that?

> Are you suggesting that HFT firms all know that someone is going to come through and buy at a price level, and hop in?

That's how I've commonly seen HFT described: these companies see an order on exchange A and, faster than anyone else, adjust their bids on exchanges B and C accordingly.


It shrinks spreads, saving investors money. It increases liquidity, for smaller orders at least (good for retail investors).

There are complex reasons why improving price discovery and price harmony between markets is good for the wider economy but they're a bit more hand-wavey...


I'll give it a shot. When you place an order for 1000 shares of AAPL, for example, your order is sent to a bunch of exchanges, and you are supposed to get your shares from the 1000 cheapest shares available for purchase. If your order gets to exchange A where an HFT is operating, they might sell you 100 shares at $131.99, but then they know there is someone buying AAPL. They send orders on faster networks than yours to buy up all the shares at other exchanges for $131.99, then list them for sale at $132.00. Your order arrives and, depending on order type, you'll probably buy those up at prices that milliseconds earlier would have been a penny cheaper.

For you this is only $9.00 more than before, but doing this all the time is where HFT makes money, without really offering any benefit to other market participants.


HFT is specifically a tool that benefits the haves (HFT firms) over the have-nots.

It can benefit stock prices for companies too, since it can cause the market to move more in their direction if machines think they should buy based on whatever they're programmed to look at.

I looked into this for a while, but I simply felt it was immoral and dropped the project.


> It can benefit stock prices for companies too, since it can cause the market to move more in their direction

which can also harm companies if algorithms decide to sell, which might trigger a sell chain for everyone


It's clear that you have no idea what HFT firms actually do. Which is okay, but then don't pass judgment.


Any highlights or summarizations?


Haven't read it, but there's a summary section in the attached PDF on page 69.


The algos being talked about here do not add true liquidity. The fact that over 2/3 of investable wealth is parked in passive funds would suggest there's not a lot of trading being done (the 2020 craze excepted). Therefore, what's happened in the past decade is artificial liquidity with very sharp, unexpected moves. Some of the biggest algos I've seen proliferate and capture exactly this phenomenon. It started in 2015 with year-ends being targeted (see 2018, 2019, and 2020). Of course, the pandemic unleashed a different beast altogether.


those passive funds still need to continuously rebalance their holdings.



