Trading Program Ran Amok, With No ‘Off’ Switch (nytimes.com)
143 points by mindblink on Aug 4, 2012 | 92 comments

'Prediction is difficult, especially with regard to the future'.

It looks like they were using some new algorithm, which would have made them a lot of money had the market gone up after their massive purchases. In that case, they would have pocketed fat bonuses and would not be in the news.

However, that did not happen, so the crying and the search for a scapegoat are on. It sounds like the banking business as usual: 'heads I win, tails you lose'.

Ultimately, there is a really serious problem with the concept of limited personal liability for companies engaging in speculation. It is an asymmetric arrangement, whereby the directors are entitled to the profits but are never personally responsible for the losses. With such rules of the game, it is advantageous to take crazy risks. Expect to see a lot more of this and many more taxpayer-funded bailouts.

Huh? Knight Capital lost a bunch of money, and will likely go bankrupt. Essentially, Knight's software bug transferred a bunch of money from Knight to everyone else. This poses minimal systemic risk to anyone else, and they will almost certainly get no bailout.

The markets have already recovered. The S&P was down a little bit on Thursday and recovered by Friday. Knight is down 60%.


This is ultimately a situation of the market being a robust and stable dynamical system.

> This is ultimately a situation of the market being a robust and stable dynamical system.

Not while it's being shaped by algorithms competing against each other, which you're a part of.

Yes, no risk to others unless you happen to be one of their newly acquired futures broker unit's customers with $411 million in deposits. http://www.reuters.com/article/2012/08/02/knightcapital-regu...

Did you read the article you just linked to? Penson's customers have their money in accounts completely segregated from Knight's electronic trading division.

"It isn't like we found out that Knight was stealing money," Sommers [a CFTC commissioner] said.

The CFTC is just watching carefully to make sure it stays that way.

Actually, don't get me wrong. I think HFT is great (and I thought your HFT Apologist series was excellent). It reduces spreads greatly, and there is nothing intrinsically wrong with prop firms, market makers, etc. But it is also an increasingly complex system, and risk is part of the business. My meta-point (certainly not particular to Knight) is that I don't think we understand all the consequences or risks stemming from the fairly saturated and very competitive business and the continuing arms race. There is also the unpredictable and fallible human component, and how it reacts to or affects the automated agents. There was even a paper on how, as we approach zero latency, there may be brand new "relativistic" arbitrage opportunities: http://www.alexwg.org/publications/PhysRevE_82-056104.pdf. The Knight incident seems to me a reminder that there is much more to be done in risk management, more robust modeling, and defensive software development technologies in fintech. I don't know when the incentives will be there to invest in such things.

Yes, exactly, but it is still risk. They certainly haven't lost their money yet. If everything works out right, they shouldn't have to. But the financial world is much too complicated to say that there is minimal risk for any party, even if everything is in Treasury bonds.

Moreover, I would suggest you read Johnson et al's research on mini-flash crashes (http://arxiv.org/pdf/1202.1448.pdf) if you hold that the markets are stable dynamical systems.

How does this research suggest the market is unstable? According to these authors, the market was (in their view) dangerously perturbed 18,520 times, more than once per day. In spite of that, it remained stable. The flash crash took an afternoon to recover from. Knight took a day.

If you perturb a system over and over and each time it quickly swings back to equilibrium, that's pretty strong evidence it is stable.

They weren't taking long-term speculative positions, their short-term market making had a major bug in which instead of buying low and selling high it was doing the opposite (thus losing a small amount on each pair of trades). For more details read Nanex's blog (http://www.nanex.net/aqck2/3522.html).

So, they launched a busted market making algorithm, lost a ton of money and no one is going to bail them out.

And the nanex guess at why it happened: http://www.nanex.net/aqck2/3525.html

"We think the two periods of time when there was a sudden drop in trading (9:48 and 9:52) are when they restarted the system. Once it came back, the Tester, being part of the package, fired up too and proceeded to continue ..."


Worst dumb mistake I've ever made was accidentally sending a test email to several thousand live customers instead of the test accounts. That was a sinking feeling.

But creating a bug that loses your company half a billion dollars in thirty minutes and bankrupts it must be stomach-churning.

Much, much worse than the bug in the market making algorithms was the design failure of not having some out-of-band mechanism to kill order traffic (or of said mechanism's failure to be tested adequately).

This is a risk management failure much more than a programming error.
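A minimal sketch of what such an out-of-band safeguard might look like (purely illustrative; all names and thresholds are invented, and a real one would live in a separate process from the trading engine):

```python
# Purely illustrative kill-switch sketch: an independent supervisor that
# tracks approximate realized P&L and rejects all outbound orders once a
# hard loss limit is breached.

class KillSwitch:
    def __init__(self, loss_limit):
        self.loss_limit = loss_limit   # e.g. -1_000_000 (dollars)
        self.realized_pnl = 0.0
        self.halted = False

    def record_fill(self, side, price, qty, mid):
        # Approximate the edge of each fill against the prevailing midpoint:
        # buying above mid, or selling below it, counts as a loss.
        edge = (mid - price) if side == "buy" else (price - mid)
        self.realized_pnl += edge * qty
        if self.realized_pnl < self.loss_limit:
            self.halted = True         # latches: no further orders allowed

    def allow_order(self):
        return not self.halted
```

The point is less the arithmetic than the wiring: the switch has to sit outside the strategy code, so a restart of the trading system can't resurrect the order flow.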

I love dumb coders!

They were not taking big positions expecting the market to go up, they appear to largely have been burning money buying and selling fast.

There is no government money involved.

Of course, buying high and selling low is always the 'reason' for making a loss. In this case, I think the program caught itself out by manipulating the market, which it perhaps naively assumed to be non-manipulable.

In other words, it was creating so much volume that, when buying (or selling), it made the market go up (or down). It was then reading the price as going up (or down) and jumping on its own bandwagon. This, of itself, would create growing oscillations in the market and growing losses.

For this to work for you, you need to first create a trend and then sit back and let the suckers pile in on it and take the losses. You then return only when you want to reverse the trend again, at a profitable level (for you). I suspect the program was just too fast for its own good and not a match for the human Masters of this art.

That's not what happened, according to nanex.

It just kept making markets in reverse (instead of joining the bid and the offer, it bid on the offer and offered on the bid).

Nanex speculates that Knight ran their tester software on the real market (the tester losing money on purpose to the main algorithm). Alternatively, it could simply be a bug that sends a bid instead of an offer and vice versa. One bit flip at the wrong place could cause that.
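To make the "reversed sides" failure concrete, here is a toy illustration (my own, not Knight's actual logic): a market maker quoting on the wrong sides pays the spread on every round trip instead of earning it.

```python
# Toy illustration of market making in reverse. A correct market maker
# buys at the bid and sells at the ask, earning the spread; with the
# sides swapped it buys at the ask and sells at the bid, paying the
# spread on every round trip.

def round_trip_pnl(bid, ask, qty, sides_swapped):
    if sides_swapped:
        buy_px, sell_px = ask, bid   # the bug: wrong side of the book
    else:
        buy_px, sell_px = bid, ask   # normal market making
    return round((sell_px - buy_px) * qty, 2)

print(round_trip_pnl(10.00, 10.03, 500, False))  # 15.0  (earn the spread)
print(round_trip_pnl(10.00, 10.03, 500, True))   # -15.0 (pay it)
```

Whether the cause was a tester running against the live market or a literal side swap, the observable behavior is the same: a small, near-certain loss on every cycle.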

Guess their developers never watched "Trading Places"....

"Wilson!!! Get back in there and SELLLLLLLL"

What I wonder, following this story this week, is how the software quality controls at a place like Knight compare with those for life-critical systems like those in, e.g., aviation.

On one hand, you'd think the QA in finance would be pretty solid, considering that the survival of the company could be at stake (witness Knight). On the other hand, I have a feeling that even there, people just don't take it that seriously.

Would love to hear from anyone with more experience writing software for these industries.

I have experience in HFT, there are similarities to market making and I have plenty of colleagues who've worked in market making. Just like any company the culture is largely dependent on those in charge. Founders of these companies fall into three buckets - traders, techies, and mathematicians/physicists - and quality control will generally be a function of the founder mix. Mostly techies: strong software culture, unit testing, realtime monitoring. Mostly scientists: strong algos, software that works, monitoring varies. Mostly traders: risk v reward is the driver, software quality is unimportant unless it affects short-term profit, monitoring is unimportant - dollars and cents are sufficient.

Obviously every firm's goals are driven by the goals of those in control. In the case of Knight they are largely a trader driven firm that has arrived late to the algo party. They were looking to get ahead by being one of the first market makers on the NYSE's new retail order matching system and probably cut some corners to get there. From a risk v reward perspective it probably looked like a good bet - with no major competitors customers would flood in and any bugs could be ironed out in live. Unfortunately the 'fat tail' (http://en.wikipedia.org/wiki/Fat_tail) struck and it may have sunk their company.

For a closer look at what went wrong see http://www.zerohedge.com/news/what-happens-when-hft-algo-goe...

Thanks for the link. Given your experience, do you have any thoughts about a small circuit breaker on every security that trips for, say, any 3-standard-deviation event?

In a quick Google search, it seems circuit breakers do not kick in for the first 15 minutes of trading: http://www.nytimes.com/2012/08/02/business/unusual-volume-ro...

In derivative markets such as futures there are predefined price and volume limits (in the jargon 'limit up' and 'limit down' - ie: max up and down movements.) These limits exist primarily to prevent market manipulation or cornering a market (buying the entire supply of a commodity) but have also been triggered by trading around events such as the Japan earthquake. Doing a quick search it seems that there are plans to trial these controls on equity markets (http://blogs.law.harvard.edu/corpgov/2012/06/13/limit-up-lim...)

The problem here though was that while some stocks had dramatic price movements that might have triggered a limit control, more heavily traded stocks were able to absorb the additional volume and the price did not move significantly. Knight were not doing anything outside of normal bands, they were buying normal volumes of stocks close to the current market price and selling close to the current price. What they were doing was illogical in a profit sense because they were buying at a high price and selling at a lower price and thus immediately losing money. I think it would be very difficult for an exchange to trap this kind of problem.

In all, the issue of determining 'normal' trading is very difficult, as markets tend to be much noisier than you might expect. The majority of trading occurs near the opens and closes of major markets (Hong Kong, London, New York) or data releases (e.g. US unemployment), so large spikes in volume and price are a regular occurrence. In equities this is even more difficult, as smaller stocks tend to be more volatile and profit reporting season increases this volatility even further. Markets are ruled by fear and greed, and falsely triggering a limit may cause larger issues than it solves.
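For what it's worth, here is roughly what the 3-standard-deviation breaker floated upthread would look like (a toy of my own, not any exchange's mechanism), and also why it wouldn't have caught Knight: their fills printed at or near the prevailing price, well inside any volatility band.

```python
# Toy per-symbol circuit breaker: halt when a trade prints outside
# mean +/- 3 standard deviations of recent prices. Window size and
# warm-up length are arbitrary choices.

from collections import deque
import statistics

class SigmaBreaker:
    def __init__(self, window=100, n_sigma=3.0):
        self.prices = deque(maxlen=window)
        self.n_sigma = n_sigma

    def check(self, price):
        """Return True if this print should halt the symbol."""
        tripped = False
        if len(self.prices) >= 20:   # arbitrary warm-up period
            mu = statistics.mean(self.prices)
            sigma = statistics.stdev(self.prices)
            if sigma > 0 and abs(price - mu) > self.n_sigma * sigma:
                tripped = True
        self.prices.append(price)
        return tripped
```

Knight's trades stayed near the market, so a price-based `check` like this never trips; only something volume- or inventory-aware (trades per second, net position growth) would have flagged it.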

Perhaps one way to mitigate this is by wrapping the calls to another system such that all trading is supervised.

Another technique is to provide API keys in the wrapper so that test programs will not have the keys to a live system.

The real problem is that the risks of these test systems have not been sufficiently identified or recognized. We are all too busy creating mock systems instead of devoting sufficient oversight to the development of test software.

Good insight.

Made me wonder what would happen to volume if markets were open 24/7...but I'll leave that thought for another day.

Unlike high-frequency trading, aviation is highly regulated. In the United States the FAA specifies pretty detailed development standards for avionics software (e.g., DO-178B: http://en.wikipedia.org/wiki/DO-178B). We're unlikely to see similarly strict requirements for financial software anytime soon.

Having talked with people who write life-critical code, the regulation isn't really what makes it safe. Safety comes from good engineering.

The regulation just makes it much harder to bring an unsafe product to market, and makes it clearer who to blame when people die.

But don't you think the existence of regulation influences the culture?

Penalties influence the culture. As we all know, the first lesson in economics is that incentives matter.

Sometimes, the right people aren't being incentivized to do the right thing.

> much harder to bring an unsafe product to market

I would assume that's the point of the regulation in the first place? Nothing guarantees great software, but requiring companies to pay for independent 3rd-party testing, say, adds significant barriers.

I don't blame anybody who thinks it's not as regulated as it should be but you might be surprised at how extensive the current requirements are. Storing ~200 million daily transactions in order to fulfill arbitrarily complex reporting specifications from SEC, FINRA, internal audit (rogue trading detection, etc.), not to mention analytics for other trading systems... Oracle/IBM/whatever enterprise big data provider gets the contract must be sending one gigantic fruit basket around Christmas time.

EDIT: Here's a little light reading for you http://finra.complinet.com/en/display/display_main.html?rbid... (or you could just take sandpaper and rub it against your brain)

I can't find the links at the moment, but some HFT guys have responded to relevant Slashdot articles over the years.

From what I remember from those accounts, traders often like to tweak algorithms at the last second and there is little to no QA before changes get pushed live.

It's sort of a byproduct of the necessity for speed. Algorithms are quite complex, and even running a basic test suite that takes a few minutes may be deemed unnecessary.

> considering that the survival of the company could be at stake

It's at stake either way.

QA is not risk-free. Time spent in QA is an opportunity cost. Many things are very time-to-market sensitive. One must balance "perfect" against "shipped".

This is even more true in the (electronic) financial industry.

I can tell you first-hand that it depends on the place: some have fairly clean tests, including fuzz testing on trading automation, while others have, well, "fix it when it fails in production".

They lost $440 million (an amount greater than their market cap), and possibly the company, on what the world knows to be incompetence.

At some point, if I couldn't stop it, I'd be tempted to just kill the power to the server rooms, all of them. There just has to be a way to cut your losses.

I'd love to know what qualifies you to throw a word like incompetence around here. My best guess is the reason it took 45 minutes to shut it off was due to a judgement call: burn through free cash, or take out all their customers too. Bear in mind some of the largest retail brokerages in the world hang off Knight.

Their primary functions are acting as an order destination and a market-maker, for efficiency's sake an obvious conclusion would be that both functions are combined in the same software (in a market where microseconds matter). So given the choice of taking a cash hit (a potentially short term affair), or a reputation hit (a much longer term and most likely fatal affair), it's entirely possible Knight knowingly made the right decision.

It's worth noting that the eventual deficit amounts to somewhere in the region of one year's net income, hardly insurmountable (and how many investment opportunities promise close to 100% return in a single year?).

Listening to the CEO on Bloomberg, it was clear that minimizing damage to customers was their primary goal (he made this point several times in the 5 minute interview), and that he appeared comfortable with the outcome.

$440 million is four times their 2011 net income. I doubt the CEO is comfortable with the outcome of being on the brink of bankruptcy. He is trying to arrange a fire sale of the company as we speak.

But I think you are right that they tried to avoid an outage. The incompetence, if any, is that they apparently did not know how much money they were losing and still kept the system going. It wasn't a calculated risk but rather an incalculable one.

I imagine it's not easy to know how much you're losing at any moment in time. They certainly knew they were building huge positions, but knowing how much they were going to lose on those positions requires an estimate of the price at which the positions can be closed (or a hedge).

What I cannot imagine is that it is common practice to leave this kind of decision to an individual's judgement call. There have to be rules for a situation like this. And there's only one sensible rule for a rogue algo racking up unknowable losses. Kill it and deal with the consequences later. Anything else is negligent.

They had >= $300m free cash as of June, that's how I arrived at the $100mish deficit.

I could buy that they competently made the best of a bad situation. But I have a hard time believing that getting into the situation was the result of perfect competence.

My time writing trading software was never on the automated end of things, so I'm only modestly qualified to comment. But if I were doing the post-mortem on this one, the first thing I'd look for is middle management time pressure forcing a large release without adequate testing. And my standard for "adequate testing" would be pretty high.

If you're going to release something that can take down the company, it's worth making sure it works. In this case, they lost circa 400x the lifetime median income of a US worker. It's hard to imagine the upside that would have justified that kind of risk.

> In this case, they lost circa 400x the lifetime median income of a US worker.

Why is that relevant?

> It's hard to imagine the upside that would have justified that kind of risk.

Actually, it's easy to imagine such an upside. Consider 800x the lifetime median income of a US worker.

Solyndra lost far more of US taxpayers' money. Are you really suggesting that Solyndra shouldn't have been considered because the amount of money was too large?

How about CA's high speed rail project? Are you really saying that it's a bad idea just because of the amount of money involved?

I'm not claiming that Solyndra or high-speed rail are good investments, I'm just using them to demonstrate that the $500M at risk isn't a show stopper. You must consider the return.

There are lots of bets that are that large or larger. For example, every time a company sells for >$400M ....

> Why is that relevant?

Because it means that there's no "we couldn't afford to do it right" excuse.

> Actually, it's easy to imagine such an upside. Consider 800x the lifetime median income of a US worker.

Double or nothing on a company that size is a stupid bet.

> Because it means that there's no "we couldn't afford to do it right" excuse.

No, 400x the average lifetime salary of a worker doesn't mean that.

> Double or nothing on a company that size is a stupid bet.

Wrong again. Double-or-nothing is often an extremely good bet. You're ignoring odds of each outcome.

Then again, you're just spewing soundbites and getting details right doesn't help with that.

> I'd love to know what qualifies you to throw a word like incompetence around here.

I'm not sure what other word you could use. Total stuff-up, maybe?

> So given the choice of taking a cash hit (a potentially short term affair), or a reputation hit (a much longer term and most likely fatal affair), it's entirely possible Knight knowingly made the right decision.

Except thanks to their competence they have taken a massive cash hit and their reputation lies in tatters.

I suspect the damage to their reputation is so bad they will be lucky to survive.

> their reputation lies in tatters

Most of the brokerages who routed elsewhere last week were back using them as of Friday.

You're right, there are so many details we don't see beyond the attention-grabbing headline, where everybody will naturally go and instantly think 'but all systems have an off switch, you pull the ruddy plug out'. With such headlines I'll bet most people were thinking that before even reading the article. Add to that how loved people in any form of finance are, with HFT being one of the most loved areas.

But with the blame culture in many areas of life, there will be investigations and somebody will either chuck themselves onto a sword or be blamed and made a scapegoat. It is all down to confidence in the company from now on, and no matter whether they did the right things or the wrong things, that alone will dictate the fallout. Sad in many ways.

RIM is a classic case where the media have controlled the stock price, which has controlled the media that control the consumers, which just ends up in a deadly spiral of self-fulfilling doom and gloom, no matter how well or badly they actually do. I'm sure there are better examples but, as you said, it's down to company image. Nothing to do with anything else, and that alone is the most important thing: a year's income is not a problem if you're around for more years, but if it looks like you won't be, then it starts to spiral badly.

I'm sure they would have tried pushing code in that time at least a few times.

Agreed, the incompetence of pushing changes like that to production (with such high stakes) is pretty bad... but not even having a switch to flip?

The server rooms might not be nearby. They might be located closer to the big exchanges' servers.

Very likely. Speed of light matters in these situations.

One nitpick: You can't "kill the power" on a market-making algorithm co-located with the exchange somewhere in New Jersey.

It happened very quickly, after years of operations.

Technically, it can take seconds to lose that much.

What they did wrong is do all their trading with the same algorithm. Way to put all eggs in one basket.

According to the article, it took them a half hour to shut it down. That's more than a few seconds.

"But as a torrent of faulty trades spewed Wednesday morning from a Knight Capital Group trading program, no one at the firm managed to stop it for more than a half-hour."

It wasn't after years of operation. The article states that the system was brand new.

I'm pretty sure they don't do all their trading with one algorithm. What makes you think they do?

So, some of the owners were looking for a way out, and magically this thing broke loose and started giving away (basically) free money to undisclosed recipients. In the meantime all the technicians were fast asleep and couldn't kick the machines down or something, while they were losing millions of dollars per minute. This article is a completely honest recap by completely honest people about completely honest traders/bankers (bankers are not people).

Edit: on a COMPLETELY unrelated note, trading firms/banks are known to actively pursue the extraction of money from their clients with bogus trades/advice http://www.nytimes.com/2012/03/14/opinion/why-i-am-leaving-g...

I would rarely suggest this, but if something is so incredibly broken that you're losing money at a rate of 800 million dollars per hour, screw the customers.

Turn it off at any cost. If you are forthcoming and transparent, customers will understand.

Point is, you don't know what the loss is.

1) They bought too much stock (incorrectly)

2) realized WTF, stopped everything

2a) more likely their clients said WTF is wrong first

3) had to sell the stock for the rest of the day.

It's only after they sold everything that the $440MM price tag surfaced. Hopefully they sold most of their positions to Goldman (instead of the free market) so one of their investors made a boatload of cash, giving them favorable terms for a line of credit.

This is wrong. The algorithm was buying and selling constantly, sometimes losing small amounts of money (usually about $15) each time, sometimes as often as 20-40 times per second for each of about 150 symbols.
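Taking those rough figures at face value, the order of magnitude checks out. Back-of-envelope only; every input below is an approximation from the numbers cited in this thread, not measured data.

```python
# Back-of-envelope check: small per-trade losses at high frequency
# across many symbols compound fast. All inputs are rough estimates.

loss_per_trade = 15    # dollars, typical per-cycle loss
trades_per_sec = 30    # midpoint of the cited 20-40/sec peak rate
symbols = 150
minutes = 45           # roughly how long the algorithm ran

total = loss_per_trade * trades_per_sec * symbols * minutes * 60
print(f"${total:,}")   # $182,250,000 (same order as the reported ~$440M)
```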

Another thread here has a link to a blog post with (admittedly speculative) evidence that, at two different times, they tried rebooting the system only to have it come back up and start making random trades again.

There seems to be a lot of confusion around market making, brokering, execution algorithms and HFT in this thread.

Pure speculation. Maybe there was an off switch, which used to work, but not regularly tested, and silently broken? Wouldn't surprise me.

Highly doubt it.

Sounds highly likely!

This was apparently an infrastructure problem of some sort: http://www.bloomberg.com/video/tom-joyce-knight-is-open-for-...

Infrastructure changes can be notoriously difficult to back out by simply using an "off" switch, particularly if this was some type of firmware upgrade that impacted all of their production servers. Backing it out would at a minimum require some type of reboot, which would cause problems with any active trades. It could very well be that they were running an active-active environment and had to go active-passive: back out the changes from the passive environment, reboot, and surgically cut over to the passive environment. This could easily take 30 minutes.

I read the Nanex article. Regardless of whether it is true or not, the general trend is towards the development of more sophisticated load-testing programs.

The most benign ones were developed for use in IT systems, e.g. ApacheBench. While these can cause disruptions if aimed at production services, this does not necessarily threaten the health of an entire enterprise.

However, the trend is that all software sectors are adopting this particular technique of testing software without sufficient regard to what happens if it is released into live systems.

For example, we have Chaos Monkey, from Netflix, which randomly shuts down services in a cloud-based system.

What would happen if software which simulated a meltdown at a nuclear facility was accidentally bundled into the build system by a tired operator? Or someone does the same with flight software?

The main software running trading platforms would presumably be supervised by another program to ensure that bad algorithms do not lose the company too much money. However, there was no such tool for the component that generated the test data.

To me, it sounds like the supervision should be done at a higher level, e.g. a wrapper around existing APIs. All software running against live systems must call into the wrapper.

Secondly, test software should conduct some kind of verification, e.g. check for evidence that it is running against a test system. This might be the presence of a nonexistent company, etc.
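A sketch of how those two ideas might compose: environment-scoped API keys plus a sanity probe for a sentinel symbol that exists only on the test system. All names here are hypothetical.

```python
# Hypothetical sketch of two safeguards: (1) API keys scoped to an
# environment, so test code never holds live credentials, and (2) a
# sanity probe for a fake ticker listed only on the test venue, checked
# before any test-generated order goes out.

class TradingGateway:
    SENTINEL = "TESTCO"   # fake ticker that exists only on the test venue

    def __init__(self, api_key, symbol_exists):
        # Test harnesses are issued "test-" keys, never "live-" ones.
        self.env = "live" if api_key.startswith("live-") else "test"
        self.symbol_exists = symbol_exists   # callable: ticker -> bool

    def send_order(self, order, is_test_traffic):
        # Test traffic must prove it is pointed at the test venue.
        if is_test_traffic and not self.symbol_exists(self.SENTINEL):
            raise RuntimeError("refusing: test traffic aimed at a live venue")
        return f"sent {order} to {self.env}"
```

On the live venue the sentinel lookup fails, so a test harness accidentally wired to production refuses to fire its first order.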

I am more than happy to compile any other ideas you may have so that the IT industry is able to build more fail safes into software.

We are starting to see some of these fail safes in practice, e.g. when you try to send an email to everyone in the organization, email software may ask if you are sure you want to do that. The problem is we haven't thought enough about these scenarios, so we don't adequately address them.

Incidentally, over in Australia, the Commonwealth Bank suffered major downtime when its outsourcer HP accidentally pushed out system-wide updates instead of applying them to select machines as originally intended.

Doesn't this mean that others made a killing, taking advantage of all the mispriced orders?

Most likely. Cf. the recent JPMorgan losses[1].

[1] http://www.pbs.org/newshour/businessdesk/2012/06/who-benefit...

"Knight is also working with Goldman Sachs to help unwind the trades behind its extensive loss, according to people briefed on the matter.

"Goldman has agreed to buy, at a discount, the shares that the trading firm had accumulated. Such a move would help Knight by taking the portfolio off its hands and freeing up capital."

What does this mean? Why would GS do this? Why would Knight do this? Couldn't they just sell them on the open market at a better price instead?

Yes and no. Given the kind of volume Knight purchased during the faulty trades, it could be difficult to offload that many shares on the open market in a timely manner. Maybe those stocks are hot Monday, maybe they're not. Knight needs capital yesterday to stay afloat, so they're likely looking to sell everything in one basket.

As for GS's motivation, they're buying at a discount. Due to the time sensitive nature of Knight's predicament, they're probably trading the portfolio to Goldman at a reduced rate. Unlike Knight, Goldman has the cash to sit on it for a while and sell the shares directly out into the open market, even if it takes a few days. Given the discount they bought the shares at, they're likely selling with a decent margin.

So GS is primarily the go-to since they're willing to leisurely unload stock they bought at a discount and they've got the cash in hand to do so. Makes sense.

Would be interesting if Goldman also made a stack off the original trades.

Strange article. Lots of text, but missing the main thing I was looking for. What kind of "erroneous trades"? Where did the money go? If you buy stock at the market you did not intend to buy, why not just sell it the next day?

You can get more technical details from nanex: http://www.nanex.net/aqck2/3522.html

Essentially, they were buying high and selling low. Many times a second.

Seems like maybe they couldn't hold on to the stock for long enough to unload the enormous volume they were dealing with. It sounded like at one point they were doing AS MUCH VOLUME AS EVERYONE ELSE on the exchange combined.


Since there's 2 parties to every trade doesn't that make 50% the limit?

Nope, sometimes they bought stock they sold themselves!

If you sell stock to yourself... why even bother to go through the exchange? Why would the exchange even allow such trades?

Knight has different programs running. It was handling >10% of all NYSE, so it must have been running a lot of servers. When the berserk algorithm wanted to buy or sell a stock where Knight was the only "market-maker", another Knight server would usually intercept the order after it had been posted on the exchange. Here's one way it might have happened: http://www.nanex.net/aqck2/3525.html

Trading volume is number of shares traded, not number of trades. Re-reading, it was actually ~600% greater volume.

"The difference reached a peak at 9:58 a.m., when the volume was six times greater."

That's pretty noticeable!

Because the program sold the stock it bought immediately, at a loss. Leaving you with nothing to sell the next day.

Why isn't automated high-frequency trading banned already?

Does it not go directly against the spirit and purpose of having a stock market with proper investors?

This issue has nothing to do with speculative trading. Knight is a broker. They provide an interface to the market for retail brokers, spread-betting outfits and so on. Their algos execute orders placed by their clients. Their new algo had a bug. That's it.

I don't know the details of retail brokers vs HFT, but this writeup http://www.nanex.net/aqck2/3522.html has a lot of charts showing trades with 25 millisecond intervals.

Just looking at things from a big-picture perspective, the fact that the system is designed to allow trades at such frequencies makes it seem like markets these days no longer exist for the benefit of the listed companies.

Then again, maybe I don't know wtf I'm talking about :-S I guess I don't understand how real value can be created from such a system.

Check out this TED talk about how these algos are actively changing the surface of the planet:


The speaker makes an argument that no one can understand how these algos interact, and that they are being studied like natural phenomena.

Chris Stucchio wrote an HFT apology: http://www.chrisstucchio.com/blog/2012/hft_apology.html

They have hundreds if not thousands of clients, some of whom in turn each have thousands of customers... so yeah, they will be executing a high throughput. And often large orders get broken up into smaller pieces so as to minimise market impact.

Can someone smarter than me please explain why a system where trades got matched / executed at a granularity of once per second or once per several seconds wouldn't work? What would be the problem with exchanges accumulating and keeping secret buy and sell orders and executing them at a reasonable interval?
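Such designs exist, usually called call auctions or frequent batch auctions. Here is a toy single-interval matcher of my own (heavily simplified: each crossing pair clears at the midpoint, with no time priority or pro-rata allocation):

```python
# Toy batch ("call") auction: orders accumulate hidden during the
# interval, then clear together at the end of it. Heavily simplified.

def clear_batch(bids, asks):
    """bids/asks: lists of (price, qty). Returns (last_clearing_price, volume)."""
    bids = sorted(bids, reverse=True)   # best (highest) bid first
    asks = sorted(asks)                 # best (lowest) ask first
    price, volume = None, 0
    while bids and asks and bids[0][0] >= asks[0][0]:
        bp, bq = bids[0]
        ap, aq = asks[0]
        traded = min(bq, aq)
        volume += traded
        price = (bp + ap) / 2           # simplistic clearing price
        bids[0] = (bp, bq - traded)
        asks[0] = (ap, aq - traded)
        if bids[0][1] == 0:
            bids.pop(0)
        if asks[0][1] == 0:
            asks.pop(0)
    return price, volume
```

The usual objection is that batching doesn't eliminate the race, it just moves it to the batch boundary, and it adds up to an interval of delay for everyone, including genuine hedgers.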

I wonder if the lack of a kill switch was linked to the power blackouts in India - what if they set it going, got cut off (power) and then only came back online 45 minutes later? Could be that just one link - say a local exchange or power for an FTTP line - failed.

I don't believe the problem was caused by a "bug" - this reminds me of "rogue trader" stories.

It is kinda weird that all problems on Wall Street are caused by "bugs in software" or "rogue traders", while executives are never held accountable.

This is an extreme example of what can happen when what should be a software company thinks it is some other sort of company. I am sure they thought they were a trading company and that software development was the necessary evil required to get things done.

Well, 400-odd million dollars in the red later, I doubt they still feel that way.

I do feel sorry for them and they probably didn't deserve this huge loss. Hopefully valuable lessons can be learned.

Can anyone provide some context on this matter? What happened wednesday and where?


Basically, they deployed a new HFT algo and it started buying high and selling low. oops!
