
London Stock Exchange smashes world record trade speed with Linux - dreemteem
http://www.computerworlduk.com/news/networking/3244936/london-stock-exchange-smashes-world-record-trade-speed-with-linux/?cmpid=sbycombinatorrplant
======
whatajoke
Is this the same system that replaced the failed Microsoft platform?
[http://blogs.computerworld.com/london_stock_exchange_to_aban...](http://blogs.computerworld.com/london_stock_exchange_to_abandon_failed_windows_platform)

Microsoft had run huge ads on how LSE was using .NET and Windows in critical
financial applications. Within a year LSE had suffered crashes in the same
critical areas.

~~~
JoachimSchipper
I believe so, yes.

In fairness, though, it may not have been MS' fault - crappy programming is
crappy, no matter the underlying OS. And Linux fanboys should wait to see if
the new system has roll-out issues...

~~~
nodata
If I remember correctly, the LSE brought in Microsoft themselves to fix the
problems. And Microsoft couldn't.

~~~
dhyasama
Source?

------
lrm242
World record? Not quite. Let's see full details on the numbers. NASDAQ
publishes their latency numbers weekly, including average, 99th percentile,
and 99.9th percentile: <http://www.nasdaqtrader.com/trader.aspx?id=inet>.
Their average latency on order entry is 98 microseconds and their average
latency on market data is 92 microseconds. Perhaps by "world record" they mean
"European record".
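
The average / 99th / 99.9th percentile figures quoted above are straightforward to compute from raw latency samples. A minimal sketch in Python using the nearest-rank percentile method; the sample values are made up for illustration, not real exchange data:

```python
# Compute the statistics a NASDAQ-style latency report quotes:
# average, 99th percentile, and 99.9th percentile.

def percentile(sorted_samples, p):
    """Nearest-rank percentile over an already-sorted list."""
    k = max(0, int(round(p / 100.0 * len(sorted_samples))) - 1)
    return sorted_samples[k]

def latency_report(samples_us):
    s = sorted(samples_us)
    return {
        "avg_us": sum(s) / len(s),
        "p99_us": percentile(s, 99),
        "p999_us": percentile(s, 99.9),
    }

# 10,000 synthetic samples: mostly ~95us with a slow tail.
samples = [95] * 9900 + [250] * 90 + [900] * 10
report = latency_report(samples)
print(report)
```

Note how a heavy tail can leave the 99th percentile untouched while dragging the average up, which is exactly why exchanges publish the tail percentiles and not just the mean.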

~~~
danielnicollet
True but this is like the "World Series" for stock trading latency. Who really
plays baseball other than the US and small group of other countries like Cuba
;-)

------
nervechannel
Normally I'd thoroughly welcome any news of large Linux-based systems
replacing slower MS-based ones, but in this case...

... these days I can't help thinking "is faster _really_ a good idea?"

~~~
rimantas
My first thought after reading the title was: "Oh great, now they can trade
faster than they can think". And then…

~~~
rbanffy
> now they can trade faster than they can think

They already do that. Now they are able to trade faster than their machines
can think.

------
siculars
If I were a critical systems operator, like, say, a stock exchange, I would
have to have my head examined by a team of head examiners for even considering
Windows as an acceptable OS platform.

Honestly, is the reason Microsoft can even get consideration for these
projects because there are so many tech executives at major corporations who
used to be Microsofties?

~~~
wglb
Have you experience with Windows Server in a properly set up production
environment? I know of at least one high-performance shop that has used
Windows servers (we are not talking onesies or twosies here--more than
hundreds) and has higher performance requirements than exchanges.

~~~
AdamTReineke
What industry has higher performance requirements than exchanges?

~~~
endtime
The only ones I can think of are space, military, and maybe medical robotics.

------
highlander
It's fascinating that these latencies are an order of magnitude faster than
most 'web services'. Can anyone provide more insight into what is specifically
involved in executing a single trade? Are they just enqueueing a trade message
in this latency or actually executing the trade? Are they including network
latency in these numbers? Also, does anyone have any insights into the
hardware and customizations?

~~~
Rimpinths
I'm a software developer at an exchange in the US. The LSE has not released
details of how it is measuring "latency", but FWIW, we measure latency as the
time it takes to submit an order and receive an order acknowledgment or fill,
with both the sending and receiving outside our firewall. The two biggest SW
components involved are the FIX handler (the component used to process orders;
FIX is an industry standard protocol) and the matching engine (the component
used to match buy and sell orders). I could of course spend hours talking
about our hardware and software customizations, but it's a very competitive
industry. Our software is C++ running on Linux, and that's about as much
detail as I'm willing to go into.
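
The matching engine mentioned above is, at its core, a price-time-priority order book. A heavily simplified sketch in Python, limited to limit orders against a single instrument; all names and the simplified model are my own illustration, not the design of LSE or any real exchange:

```python
from collections import deque

class MatchingEngine:
    """Toy price-time-priority matching for one instrument.
    Real engines handle many order types, cancels, and market data;
    this sketch only matches incoming limit orders against resting ones."""

    def __init__(self):
        self.bids = {}   # price -> deque of (order_id, qty); best bid = max price
        self.asks = {}   # price -> deque of (order_id, qty); best ask = min price
        self.fills = []  # (taker_id, maker_id, price, qty)

    def submit(self, order_id, side, price, qty):
        book, contra, crosses = (
            (self.bids, self.asks, lambda p: p <= price) if side == "buy"
            else (self.asks, self.bids, lambda p: p >= price)
        )
        # Match against the contra side at the best crossing price first,
        # oldest resting order first (time priority within a price level).
        while qty > 0 and contra:
            best = min(contra) if side == "buy" else max(contra)
            if not crosses(best):
                break
            queue = contra[best]
            rid, rqty = queue[0]
            traded = min(qty, rqty)
            self.fills.append((order_id, rid, best, traded))
            qty -= traded
            if traded == rqty:
                queue.popleft()
                if not queue:
                    del contra[best]
            else:
                queue[0] = (rid, rqty - traded)
        # Rest any unfilled remainder on our own side of the book.
        if qty > 0:
            book.setdefault(price, deque()).append((order_id, qty))

engine = MatchingEngine()
engine.submit("A", "sell", 101, 100)   # rests on the ask side
engine.submit("B", "buy", 101, 60)     # crosses: fills 60 @ 101
print(engine.fills)
```

In a real system the FIX handler would sit in front of this, translating FIX NewOrderSingle messages into `submit` calls and fills back into execution reports.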

~~~
_grrr
"we measure latency as the time it takes to submit an order and receive an
order acknowledgment or fill, with both the sending and receiving outside our
firewall." This isn't normally the metric exchanges are referring to when they
say 'latency'. They mean the internal processing time, within their network,
to process, fill and generate a response to an incoming order. Anything else
depends on the individual client's connection: some may be co-located, some
may not, and so on, exactly as lrm242 describes.

~~~
Rimpinths
I thought what lrm242 said and what I said were pretty much the same thing.
But there is no standard definition of 'latency', so it's hard to say what
"they mean" unless they publish a definition on their website. But I know of
at least three major exchanges (one of which I work for) that consider the
starting and endpoints for latency measurements to be outside the firewall. As
for what Turquoise means, I'm not sure but would love to see their definition
to know if their numbers are comparable. Also would like to know if these
measurements were made under load because that can also make a significant
difference.

------
ilitirit
This information is relatively useless to me. Was the old system slower
because of the OS, .NET, or some other factor(s)?

~~~
alphabeat
From memory, I believe it was the OS. This was around the time the
Accenture-led development wasn't working out, a few years ago.

~~~
DrJokepu
I would be really surprised if it was the fault of the OS. I mean, Windows can
be really fast in the right hands, especially since they introduced HTTP.SYS
(parts of the HTTP server now run at kernel level). However, .NET being a
garbage-collected, managed and JITted environment, it is maybe not the best
tool for reliably achieving this kind of response time. Any serious and
competent .NET developer would admit that.

In general, I would blame the incompetence of Accenture for the failure: bad
decisions, bad architecture, bad management, using the wrong tools, crappy
code.

~~~
rbanffy
> especially since they've introduced HTTP.SYS (parts of the HTTP server now
> run in kernel level).

Isn't it something Red Hat did a long time ago that was widely regarded as a
Very Bad Idea?

And why would you use HTTP in this scenario anyway?

~~~
DrJokepu
I don't know why it didn't work out for Red Hat but it works quite well on
Windows. And honestly I don't know whether they use(d) HTTP or not, but given
that HTTP is dead simple and as lightweight as a protocol could get, maybe it
wouldn't be such a bad idea to use it for exchanging messages in this
scenario.

------
nozepas
I'm sure they will notice many more benefits than just improved trading times.

For example: improved infrastructure management, an overall performance
increase, fast patches for security bugs, and so on.

Of course, as JoachimSchipper says, if the applications on top of the OS are
crap, a different OS won't solve the problem, but a good underlying OS is a
good start.

~~~
_grrr
And, due to the lower latency, they become a more attractive source of
liquidity, generating more incoming orders and more revenue.

------
ashish01
This makes me wonder how mission-critical, real-time systems like the LSE
maintain a balance between being ACID and being fast. Do any of these
transactions ever touch the disk? If not, how do they handle machine
failures?

~~~
gxti
No, the core of a stock exchange resides entirely in RAM. Everything is reset
once a day (overnight for stocks, immediately after regular hours for
futures), and brokerages are responsible for resubmitting long-lived "good
'til cancelled" orders each day before the session begins.

Despite being RAM bound it's easy to parallelize because each stock is its own
isolated exchange, so they can be distributed across hardware in proportion to
the average volume in each stock. In other words, it's localized but
embarrassingly parallel.
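
The per-stock partitioning described above amounts to a simple routing layer in front of the engines. A sketch in Python; the volume-weighted assignment mentioned above is omitted in favour of plain hash partitioning, and all names and symbols are illustrative:

```python
# Route each order to the engine that owns its symbol. Because every
# order for a given symbol lands on exactly one engine, each engine's
# book needs no cross-machine coordination.

NUM_ENGINES = 4

def engine_for(symbol, num_engines=NUM_ENGINES):
    # Stable hash so the same symbol always maps to the same engine.
    return sum(ord(c) for c in symbol) % num_engines

orders = [
    ("VOD.L", "buy", 100),
    ("BARC.L", "sell", 50),
    ("VOD.L", "sell", 30),
]

by_engine = {}
for order in orders:
    by_engine.setdefault(engine_for(order[0]), []).append(order)

print(by_engine)
```

A production system would instead assign symbols to engines by hand (or by measured average volume) to balance load, since hashing knows nothing about how busy each stock is.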

The only traditional databases are for reporting purposes e.g. who traded with
whom and are append-only as far as the core exchange code is concerned. The
reporting database is coupled through the same firehose feed that you can get
as an exchange member, albeit with more redundancy. There are also access
control systems that only come into play when you first open a session, and
lots of other moving parts each with their own task. For example, if you lose
your connection to the exchange you can ask a certain (non-core) server to
replay all the events that happened from a given point in time in order to
catch up. These, too, would be listening to the main event stream and spooling
to disk, but the central exchange processes are RAM-only.

How do they handle machine failures? That's harder to speculate on from the
outside, but if I were them I'd be running the same exchange in parallel on
2-3 machines. I don't know how it could be done without serializing the
incoming order flow to make sure that each machine sees the same order of
events, but it's not impossible.
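
The "serialize the incoming order flow" idea in the last paragraph is essentially sequenced state-machine replication: a single sequencer stamps every event with a global sequence number, each replica applies events strictly in stamp order, and all replicas therefore compute identical books independently. A toy sketch with hypothetical names:

```python
import itertools

class Sequencer:
    """Assigns a global sequence number to every incoming event."""
    def __init__(self):
        self._counter = itertools.count(1)

    def stamp(self, event):
        return (next(self._counter), event)

class Replica:
    """Applies sequenced events deterministically. Any replica that has
    seen the same stamped stream ends up with identical state."""
    def __init__(self):
        self.applied = []
        self.next_seq = 1

    def apply(self, seq, event):
        # A gap means this replica missed an event and must request
        # a replay (the non-core replay server mentioned above).
        assert seq == self.next_seq, "gap detected: request a replay"
        self.applied.append(event)
        self.next_seq += 1

seq = Sequencer()
replicas = [Replica(), Replica(), Replica()]
for event in ["new A buy 100", "new B sell 100", "cancel A"]:
    stamped = seq.stamp(event)
    for r in replicas:          # in production: multicast the stamped event
        r.apply(*stamped)

print(replicas[0].applied)
```

Since every replica sees the same total order of events, any one of them can take over on a machine failure without the others disagreeing about the state of the book.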

------
motters
Encouraging ever faster trading times might be unwise, leading to instability
and "hunting". There needs to be some damping in the system to guard against
large uncontrolled fluctuations.

~~~
eru
If damping is profitable, someone will do it.

~~~
motters
Well, it's a complicated system. Whilst individual traders might benefit from
faster trading, if you consider the health of the system as a whole, which
favours steady, predictable, controlled growth, faster trading may introduce
instabilities which result in reduced profitability.

~~~
eru
For a different perspective, there was a huge opportunity to make a killing
with damping during the recent flash-crash.

------
skbohra123
They meant GNU/Linux of course.

------
ergo98
They replaced one software package (a custom solution by Accenture) with
another (from a company they bought and whose solution, as an aside, they now
sell; ergo, take their claims with a grain of salt).

Drawing conclusions about the underlying systems from this is speculative and
somewhat ignorant.

~~~
anthonys
Of note is that it cost less to acquire MillenniumIT than it did to run their
Accenture-provided solution for one year.

