I have yet to see a convincing rationale for how high frequency trading adds value to the system - it certainly doesn't seem to add pricing stability. Because this is hacker news, I suspect a few people reading this either work in the industry and/or have strong opinions about it, and I would love to hear why I am wrong. Because I do like it when clever computer scientists make money, and that seems to be pretty much the only social benefit of HFT, at the cost of flash crashes and things like the example above.
This implies that humans are actually better than machines at making data-driven decisions at high rates. They're not. They're astoundingly not once you compute the cost of humans versus the cost of machines on a per-decision basis.
Liquidity in the market used to be provided by large groups of sweaty, overpaid alpha males yelling at each other. We replaced them with dueling robots. The robots can provide liquidity for a fraction of the price. The alpha males really, really hate competing with robots, because the alpha males invariably lose, so they complain that robots are stealing the money that the alpha males used to extract from their customers by right of being the one with a license to be shouting and sweaty at a particular physical location.
Occasionally a robot blows up. Not a problem -- robots are easy to replace. Besides, humans blow up all the time. We only ignore their ridiculous strictly-inferior-in-every-way-at-this-task nature because they look more like us than the robots do, and because these particular humans being displaced used to be rich, whereas e.g. telephone operators tasked with manually doing call routing (also clearly inferior to highly reliable distributed algorithms) were poor.
But when a computer screws up an edge case, versus the many, many times a doctor will screw up a diagnosis, which instance will get more attention? People are scared of black-box machines and will take particular note of the times an algorithm has screwed them over, regardless of the times a human has done the same.
Also, when the possibility of examination exists, computerized decision making is far, far easier to audit (i.e. to determine blame).
Cars with human drivers kill thousands, but nobody blinks. The day a robotic car kills a single person, all the news networks will have a field day.
As more robots crowd into the very fast strategy space, there tends to be more collision of strategies and thus more extreme price movements as many robots all try to buy/sell the same thing at the same time. If they all pile onto one side of the market, liquidity disappears and price movements become dramatic. In the very fast strategy space, there is less capability for complex strategies requiring long memories. If you want to save some nanoseconds to win at this winner-take-all market, you need to reduce your memory and processing time.
This isn't saying that humans would never display herd behavior, but that they do so less often.
However, they do it (by definition) on a time scale that is perceptible to other humans.
Edit to add: The dueling robot at HNsearch says a) it was tptacek and b) I should trade with it because it remembers HN comments better than I could ever hope to.
A lot of this "but ... but ... but ... we NEED a human in the loop" handwringing that we're seeing on the news channels is obsolete people who used to be ridiculously overpaid complaining that they have been replaced by fast computers and efficient code.
(Disclaimer: Non-sweaty alpha male HFT algorithm writer here ... ha.)
This $440m example shows that a human should be thrown into the mix. Let the human/machine strengths cancel out each other's weaknesses.
For a period after the introduction of autoquote in 2003, providers of liquidity captured most of the surplus and enjoyed larger margins on trades (See Figure 3). However, this advantage quickly dissipated as more parties implemented AT. The first-mover advantage doesn't apply in the world of equity markets; competitors quickly duplicated AT strategies and competition swiftly lowered spreads in the second half of 2003. Today, spreads on equities are much lower than pre-2003 largely thanks to AT.
As for pricing stability, let me disregard AT glitches for the moment. Algorithms can tirelessly monitor market information, whether media reporting, filings, event rumors (e.g. M&A), order trends, etc. Humans are relatively restricted to a few information sources when executing trades in comparison to AT. In addition, AT reacts faster to new information and can adjust bid/ask near-instantly. Therefore, price volatility decreases as a result of increased information efficiency.
Glitches and flash crashes are a negative counter-example to the information-efficiency argument above. I leave it to the reader to decide whether the liquidity benefits justify the occasional flash crash. However, recognize that this phenomenon is not exclusive to AT: many human traders have caused similar crashes of their own -- I'm looking at you, London Whale.
Tens of billions? Hundreds?
You write a risk check gateway once. It rarely if ever needs to be touched.
If you do this right, you roll out changes on one machine with strict risk controls (1000 share position, for example) to prevent such a blowout. Then you transition everything else.
Ironically, this should be exactly what other tech firms should do: roll out changes to a small area, check basic problems, and then move on to a larger rollout.
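A minimal sketch of the canary-style risk control described above. The class and exception names are mine, and the 1000-share cap is the illustrative figure from the comment, not Knight's actual controls:

```python
# Hypothetical canary risk control for a staged rollout: hard-cap the
# absolute position the newly deployed instance may accumulate, and halt
# it entirely the moment the cap would be breached.

class CanaryLimitBreached(Exception):
    pass

class CanaryRiskControl:
    def __init__(self, max_position=1000):
        self.max_position = max_position  # e.g. 1000 shares during rollout
        self.positions = {}               # symbol -> signed share count
        self.halted = False

    def check_order(self, symbol, side, qty):
        """Reject the order if filling it could breach the canary cap."""
        if self.halted:
            raise CanaryLimitBreached("strategy is halted")
        signed = qty if side == "buy" else -qty
        projected = self.positions.get(symbol, 0) + signed
        if abs(projected) > self.max_position:
            self.halted = True  # stop the canary instance, not just this order
            raise CanaryLimitBreached(
                f"{symbol}: projected position {projected} exceeds cap")
        # Conservatively assume the order fills in full once sent.
        self.positions[symbol] = projected
```

Note the check is pre-trade and pessimistic: the position is charged when the order is sent, not when a fill comes back, so a burst of in-flight orders can't blow through the cap.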
Such checks are only as good as the imagination of the author.
> Then you transition everything else.
Which is all well and good until you have an issue where two independent systems create a feedback loop. Such problems are only evident 'at scale'.
I'm not blaming or trying to exonerate anyone. I love armchair quarterbacking as much as the next guy, but a small trading shop runs a little differently than America's largest electronic market maker.
>Such checks are only as good as the imagination of the author.
SEC and FINRA regulations require very specific risk checks. One of those involves looking at potential positions if all of your (buy or sell) orders get filled. You have to sum your max potential gross position across all symbols and decide if you will trip a limit. If so you have to halt.
This is not imagination; this is codified. If you send an order for 100 shares, you HAVE TO ASSUME they were filled when you send the next order. That's the rule. This was not left to imagination.
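A rough sketch of that codified check, under the rule's assumption that every open order fills. The function names, data shapes, and limit value are illustrative, not any firm's actual implementation:

```python
# Hypothetical pre-trade gross-position check: assume every resting order
# fills in full, sum the worst-case absolute position across all symbols,
# and refuse to send the new order if the firm-wide limit would trip.

def max_potential_gross(positions, open_orders):
    """positions: symbol -> signed shares held.
    open_orders: list of (symbol, side, qty) resting in the market."""
    buys, sells = {}, {}
    for sym, side, qty in open_orders:
        book = buys if side == "buy" else sells
        book[sym] = book.get(sym, 0) + qty
    worst = 0
    for sym in set(positions) | set(buys) | set(sells):
        pos = positions.get(sym, 0)
        # Worst case per symbol: all buys fill, or all sells fill.
        worst += max(abs(pos + buys.get(sym, 0)),
                     abs(pos - sells.get(sym, 0)))
    return worst

def pre_trade_check(positions, open_orders, new_order, gross_limit):
    """True if the new order may be sent; False means halt."""
    return max_potential_gross(positions, open_orders + [new_order]) <= gross_limit
```

The key property is the one described above: the order is counted against the limit the moment it is sent, before any fill confirmation arrives.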
"Which is all well and good until you have an issue where two independent systems creat a feedback loop. Such problems are only evident 'at scale'."
There should be no feedback where the trading engine affects risk check. In fact, the new rules require separate code bases and separate legal entities.
Everyone else believes that HFT exists because it's profitable, and that it subtracts value from the system.
You can choose who you want to believe.
Market making has always been work, so it's always cost money. There's always been money subtracted from the system, but now it's less than it used to be.
You get back an acknowledgement with each fill. There are risk checks in place that should check whether the order was a take- or add-liquidity fill (as per rule 15c3-5, amongst others).
This is a risk check that should have been handled by a component which appears not to exist. They could lose FINRA and SEC authorization if it is as bad as it sounds ...
Isn't it really odd that a company called "Knight" makes all these nonsensical trades just a couple weeks after the movie where a similar thing bankrupted a company?
Looks like someone's HFT system backfired. I'm looking forward to the Nanex analysis of this one.
Nanex on the 2010/5/6 'Flash Crash':
Nanex has done some great work. To date, afaik, they're the only ones to try and figure out the Facebook IPO trainwreck from the technical side. And it's a doozy.
As a result, they were pushing stocks higher and higher, which created a huge incentive for a system like Knight Capital's. Again, this explains not only the volume surge seen on the market but also massive moves in certain stocks like China Cord Blood Corporation (CO), which rose several hundred percent until someone finally stepped in. The important thing to note here is that there isn't a reason under the current SEC order-cancellation methodology to bail out Knight Capital and its algorithm.
Similarly, they were trading Exelon Corporation (EXC) and were losing $0.15 on every single trade, and when that's being done 2.4k times per minute, it's very easy to lose money.
Take a look at the Vice documentary on tiger part harvesting. China states that tigers are endangered, yet there are these secret farms that breed thousands of them, and they make tiger dick wine and all sorts of things with their other parts as well.
I wouldn't trust any biological products being produced in China to be on the level.
They're calling it a "glitch" to suggest something unavoidable, a problem that could have happened to anyone.
It's the same vein of bullshit we heard in 2008 -- "We couldn't possibly have known!" -- and more recently with the string of "rogue traders".
In every case, it's just a bunch of folks who don't want to take responsibility for their gambles when they lose.
They gambled on an algorithm and everything related to engineering and operating it.
On some level or another they failed spectacularly and they're calling it a "glitch" - a minor malfunction, a transient error, a spurious little blip.
$440 million pissed away over the course of 45 minutes is not a glitch by any stretch of the imagination.
It's a failure of epic proportions at multiple levels. Failure to test, to review, to react and finally failure to own up.
Market makers gamble on their ability to set spreads that will produce a profit.
They're using code to generate spreads, execute head fakes and stuff quotes to their advantage. They're "players" as much as anyone.
Incidents like this make me think that I'd have a panic attack with every commit, if I worked in the financial industry.
You might be far less likely to blow up the way Knight did, but you're hardly immune.
I try to build global failsafes into my code, particularly network code. For instance, a recent Twitter project I wrote has various loops calling their API. That code has a global counter that says "this program can only ever make 25 calls", enforced in the bottom-level HTTP calling function. If the rest of my code is correct I'll never trip that limit; I have application-specific logic that means I should only ever need to make 10 calls. But I appreciate having the failsafe in case that more complex logic fails.
A heuristic like that would not allow Knight to function. Knight is an electronic market-maker (one of the biggest) and is probably making markets in thousands of symbols and likely updates those quotes several times a second. Their normal quoting rate is on the order of millions of messages/minute and in volatile times may spike 10x or more and still be within 'normal' limits.
Until they blow up one day themselves...
Call me cynical, but no type system, no matter how sophisticated, nor team of programmers, no matter how talented & experienced, is immune to this type of thing.
Anyway, I don't think they'll blow themselves up big, like Dark Knight Capital did. First, this specific kind of thing doesn't happen often (where one firm just messed up so much that it makes world news). Second, it's clear from Jane Street's videos that they're pretty serious about quality. If I were going to bet on which market maker will screw up, it would be somebody else.
In 2003, I was using a trade API that gave me confirmation of canceling an order at 10am -- and then executed the same order at 1pm. Apparently, even though it confirmed the cancellation, the order wasn't canceled. No matter how well your code is equipped against that, it's a legal problem, not a technical one.
Similarly, when the exchange retroactively cancels ("busts") trades after a flash crash, with a rather arbitrary rule (e.g. all trades between 3:45pm and 3:54pm in AAPL with price < $500) - you're left holding the bag, regardless of how good your code is.
The only winning move in these cases is not to play -- unfortunately, you can only tell retroactively.