Hacker News
Show HN: I'm building an open-source, high-frequency trading system (scarcecapital.com)
301 points by zygomega on April 15, 2013 | 190 comments

This is a nice little research project, and I hope you learn a lot from it. Having written algorithmic trading systems, I think you are missing a couple of central points:
1. There is no such thing as the "best" algorithmic trading platform, because algorithmic trading is such a broad term. Architectures that make sense for one class of trades do not make sense for another.
2. Contrary to popular opinion on this forum, algorithmic trading is not necessarily only low-latency and high-frequency. Frequency, latency, and algorithmic complexity are really dials on the system specification, and along with cost they dictate how you will build the system. Do some work figuring out the trades you are targeting before setting those dials.
3. The actual execution part of an algo trading system is usually the easiest part. If you've written one before, writing new ones is usually trivial. Finding the trade, building appropriate risk systems/practices, building backtesting frameworks, and building exchange reference data systems are all much more challenging and take the majority of the time in these systems.
4. Finally, if you are truly targeting low latency and high-frequency events, concurrency is your enemy, not your friend.

To everyone saying that the cost in these systems is the cost of co-locating servers or FPGA cards: you're wrong. You can get hosting deals/leases on those kinds of things for the same cost as high-end web hosting. The cost of running these systems is twofold: (1) paying the employees (because your competitors can pay them a lot) and (2) having deep enough pockets to survive the bad days/weeks/months. These are the same costs that have always existed in the trading space and have nothing to do with electronic trading. In fact, electronic trading has lowered the information asymmetry and made it easier for new participants to be involved in the markets.

> ...truly targeting low-latency and high frequency events concurrency is your enemy, not your friend.

This reminds me of the LMAX financial trading platform, where they started with a concurrent model but ended up using a single thread "...that will process 6 million orders per second..."
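To make the single-writer idea concrete, here's a toy Python sketch (class names are hypothetical; the real Disruptor is a preallocated ring buffer with memory-barrier-based sequence counters, and a GC'd dynamic language obviously isn't where you'd deploy this):

```python
from collections import deque

# Illustrative sketch of the LMAX idea: a single business-logic thread
# consumes events strictly in sequence, so the logic itself needs no locks.
# The deque stands in for the Disruptor's ring buffer.

class SingleThreadedProcessor:
    def __init__(self, handler):
        self.events = deque()      # stand-in for the ring buffer
        self.handler = handler     # all mutable state lives behind this one consumer

    def publish(self, event):
        self.events.append(event)  # producers only ever append

    def drain(self):
        # The single consumer processes events in arrival order,
        # so handler state never needs synchronization.
        while self.events:
            self.handler(self.events.popleft())

processed = []
p = SingleThreadedProcessor(processed.append)
for order in ["buy 100", "sell 50", "buy 25"]:
    p.publish(order)
p.drain()
```

The point is that concurrency lives at the edges (publishing and consuming), while the business logic stays sequential and lock-free.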


Yes, but the concurrency has been shifted to the two disruptors at either end of the business-process logic. And I suspect the business logic is fairly simple and lends itself to the straight-through solution. If you want to react to a news event (say), you have to somehow send a signal to the feed requesting recent history (the last 20 minutes, say), have something waiting to look at it, crunch the numbers when they arrive to see if something's up, then interrupt the existing trade decision/risk management schedule with a potentially better trade. Complexity increases quickly, and concurrency helps do all of that at high speed.

As your complexity curve increases, your latency expectations must go down. That is what I meant when I said there is no single trading system that can be the "best".

If what you are interested in is complex decision making then it may make sense to use a different sort of messaging technology than LMAX, but you won't be getting into anything remotely "low latency". Nothing wrong with that, just needs to be a known expectation.

Yep, I've pored over the technology and concepts there and it's awesome. So awesome that I think you can increase complexity without too much of a latency penalty. Just behind the fast guys but way ahead of the straight algo guys is where I think the opportunities lie (I'm kind of answering your previous question about where the dials sit in my mind). I might be wrong, and would like to test out all dial positionings at once before locking in. And that's where a flexible Haskell version could shine.

I have no opinion about the choice of haskell. That said, you are already making decisions about the dials if you go in with an architecture that relies on massive concurrency, functional languages, etc. You cannot for instance get the latency dial very low with that central architecture decision.

Again there is nothing wrong with that. It's just better to state it up front as an expectation and/or a goal than to assume you are going to be able to pivot easily once you have an architecture in place.

I can't recommend working with the LMAX library enough if you are interested in low latency in the JVM. Lots of people have driven to the same/similar place but LMAX was the first to just let everyone see it.

Yes, it is interesting: when you become highly concurrent, the penalty for context switching starts taking its toll, but only once the rest of your trading system is already optimized.

Thanks Kasey,

I'm probably thinking of algos more generally and generically. For example, whatever the class of trades, I think that what you are doing is making a stochastic forecast of future price movements. So maybe there's a good separation possible between the algorithms that create a distribution and the algos that determine how to trade once you have that distribution. But saying this probably reveals a bias I have towards thinking statistically. And I totally agree that the majority of the system build that's important is the boring stuff. An algorithm should be able to tell if the market feed has died or the market shuts down unexpectedly, right? And know what to do next. And it's good to hear that someone thinks the costs are much less than popular opinion suggests.
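A hedged sketch of that separation, with a deliberately naive forecast (all names, data, and thresholds here are made up purely for illustration):

```python
import statistics

# One layer produces a forecast *distribution*; a separate layer decides
# how to trade given that distribution. Both are toys.

def forecast(returns):
    """Stochastic forecast: here just a naive (mean, stdev) of recent history."""
    return statistics.mean(returns), statistics.stdev(returns)

def decide(mean, stdev, threshold=0.5):
    """Trade only when the expected move is large relative to uncertainty."""
    if stdev == 0:
        return "hold"
    edge = mean / stdev            # a crude signal-to-noise ratio
    if edge > threshold:
        return "buy"
    if edge < -threshold:
        return "sell"
    return "hold"

mu, sigma = forecast([0.01, 0.02, 0.015, 0.012])
action = decide(mu, sigma)
```

The payoff of the split is that forecasting models and execution/sizing logic can then be developed, tested, and swapped independently.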

I am curious about this project because I love this subject.

However, it is impossible to build an open-source HFT system, precisely because it will be open source. The other sharks will know how you trade, and they will only need to play against you by sending opposite orders every time. That is what happened when the source code of a big US bank was stolen. So it would be more appropriate to say it will be a fast trading system. The other reason is that you can't build a perfect HFT system because it's a moving target: it is a war in the microsecond market. The sharks will send you bogus orders just to disrupt your system, or they will manipulate the market. And finally there is a hardware problem: to do HFT you need to rent your own servers inside the market, use FPGA cards, etc.

In conclusion, I'm only saying it is the wrong goal to say "I will build the best open-source HFT system" rather than "I will build the best open-source fast trading system". This project is interesting (and there is Haskell, cool!) so I will stay connected to see how it evolves.

Couldn't the infrastructure be open source while the specific trading algorithms are kept closed source?

This could be a lot more fun than classic codewars.

For 99% of the "system" there's no particular reason you'd have to connect to the NYSE and trade real stocks for real money. Looks like they're feeding from iqfeed, but I think it would be huge fun to create an imaginary competition exchange, convince about 100 algo writers to compete, and shove them up against each other purely for the fun of it, see who's a better algo writer.

With modern virtualization, it should be pretty easy to distribute a pack of images, including practice data, to replicate an entire financial system on a small scale in your basement, then once you think you have a decent algo, upload your image to the competition league. If your hft codewars league wants limits for storage or cpu cycles, virtualization is a pretty easy way to implement it.
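A competition exchange of that kind mostly needs a matching engine at its core. Here's a toy price-time-priority sketch in Python (hypothetical and heavily simplified: no cancels, no fees, no per-participant fill bookkeeping, none of the things a real engine handles):

```python
import heapq

# A toy price-time-priority matching engine: resting orders sit on two
# heaps, incoming orders cross against the best opposite price first,
# ties broken by arrival sequence.

class ToyExchange:
    def __init__(self):
        self.bids = []    # max-heap via negated price: (-price, seq, qty)
        self.asks = []    # min-heap: (price, seq, qty)
        self.seq = 0      # arrival order breaks price ties
        self.trades = []  # (price, qty) fills, in execution order

    def submit(self, side, price, qty):
        self.seq += 1
        book, against = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Cross against resting orders while prices overlap.
        while qty > 0 and against:
            best = against[0]
            best_price = -best[0] if side == "sell" else best[0]
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            fill = min(qty, best[2])
            self.trades.append((best_price, fill))
            qty -= fill
            if fill == best[2]:
                heapq.heappop(against)
            else:
                # Same key and seq, smaller qty: heap invariant is preserved.
                against[0] = (best[0], best[1], best[2] - fill)
        if qty > 0:   # any remainder rests on the book
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, self.seq, qty))

ex = ToyExchange()
ex.submit("sell", 101, 10)
ex.submit("sell", 100, 5)
ex.submit("buy", 101, 8)   # fills 5 @ 100, then 3 @ 101; 7 rest at 101
```

Wrap something like this in a network front end, feed it replayed or synthetic ticks, and you have the kernel of an algo-writers' league.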

Superficially the most obvious thing to do is totally make everything up, from top to bottom, but it would probably work just as well to use real live market data and overlay on top of it. Just as air traffic control simulation has very slowly moved over the decades from purely random to sorta realistic to actual airport data.

This could be quite a bit of fun.

We did this in collaboration with UofC, using our Algo trading platform (http://www.optionscity.com/event/uchicago-midwest-trading-co...) and it was really fun.

Hm, I wonder if there is an interest to make this a more open event :)

>Hm, I wonder if there is an interest to make this a more open event :)

Yes. I bet there is.

I like your thinking V, and that's not too far away from one of our ideas: take some algos that are representative of typical market-maker and trader behaviour and fit the parameters so that the simulated trading starts to look like the live data. Maybe it's a better idea to use real algo makers.

I believe that is how it works: open-source infrastructure, private algos.

I intend to publish and collaborate on private algos. Even if others have the code they still don't know where I choose to operate and deploy capital. And if they get rich on the code good luck to them - maybe they'll sponsor a dedicated server for 1.0

Yes, you send your code to the servers of a company and use their API to hit the market with their servers, or you rent your own racks. It will cost you a lot, but in the HFT world it is the only solution. Every nanosecond counts: you are the fastest or you are not.

Just curious... did you actually look at the project before writing all this?

If you need smart algorithms operating autonomously against motivated adversaries, your go-to must be open source. The security guys proved that a long time ago.

You're right in that the HF bit doesn't really fit but 'fast trading' doesn't have the same ring to it.

I don't think your security analogy applies to trading algorithms. Open source is good for security because it aids in the discovery of vulnerabilities. However, good trading algorithms are usually reliant on some form of information asymmetry. If you make a discovery and develop a trading algorithm to exploit it, I would think it would be advantageous to keep it a secret for as long as possible. This is not my field of expertise so please correct me if I'm misunderstanding your analogy.

Edit - I love the project by the way. As others commented already, there's a really strong argument to be made in favor of open sourcing the infrastructure side of this. And that appears to be where most of your effort is focused (e.g. open infrastructure, private trading algorithms).

The algo is only one part of a real trading organization. Granted, it's the one that gets all of the credit, and is perceived as the sexy side, but operations and reporting are easily as important. If you had the best algo but you kept making mistakes or mis-reporting because your platform was crap, you'd be out of business as fast as if your algo didn't make money. I'm talking here of hedge funds, where assets under management are all-important and generally external. It might be a different story in a market maker or prop shop, where getting research into production fast is all-important.

Thanks Dan, the analogy is a stretch maybe, but most algorithms fail due to being narrow models of reality, criticised by too few eyes. If I discover something, it's extremely unlikely that others will believe it anyway (unless I prove it by becoming filthy rich as a result, in which case I'm just fine).

You're pretty much right about good trading algorithms, but that doesn't necessarily apply to good traders or good trading systems. A good system - hft or otherwise - needs to be flexible and evolve to stay in touch with an evolving ecology. And to do this, it needs to keep track of many different trading ideas and algorithms, each of which may have its day in the sun.

Have you considered python for the research/algo side of things?

I work at a hedge fund in London at the moment, and there's a massive shift away from R towards Python in the algorithmic shops that I know about here. There's also the great work that Quantopian are doing on their backtesting framework, zipline [1].

[1] https://github.com/quantopian/zipline

Yes, I am aware of that trend and considering a switch. I'm personally more used to R is all, but if I found collaborators I'd jump to where the code base goes.

The only other factor is that I find R pretty aligned with Haskell, both having a somewhat functional pedigree, so code translates pretty nicely from a rapid hack at the problem to the more robust approach.

zygomega, I'm one of the zipline maintainers and we'd love to collaborate with you. There are a few people doing HFT research with zipline, and there's a lot of work to do. At quantopian (my day job), we focus on longer hold periods, so there is room in the zipline ecosystem for you to do HFT.

The main benefit we've found with Python as the algo language is that it allows for stats programming with pandas, but also OO or functional programming for the algo logic. This smooths the transition from research to production, just as you're describing with R -> Haskell, but you can stay in one language.

I think one of the biggest potential wins with parallelization is if you can assume all positions are closed overnight, most often true for HFT. That way, you can simulate all the trading days in a test range in parallel. This is quite similar to the parallel processing we do to handle the large number of concurrent backtests running at quantopian. We did all of that with python, but I'd be fascinated to see it done with haskell.
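As a rough illustration of the day-parallel idea (toy strategy and data; for CPU-bound work in CPython you'd reach for a process pool rather than threads, but the structure is the same):

```python
from concurrent.futures import ThreadPoolExecutor

# If all positions are flat at each close, every trading day is an
# independent simulation, so a test range can be backtested in parallel
# and the per-day PNLs simply summed afterwards.

def backtest_day(day_prices):
    """Toy strategy: buy at the open, sell at the close."""
    return day_prices[-1] - day_prices[0]

days = [
    [100.0, 101.5, 101.0],   # +1.0
    [101.0, 100.0, 102.5],   # +1.5
    [102.5, 103.0, 102.0],   # -0.5
]

with ThreadPoolExecutor() as pool:
    daily_pnl = list(pool.map(backtest_day, days))

total_pnl = sum(daily_pnl)
```

Note this only works because the overnight-flat assumption removes all cross-day state; the position-sizing question raised below is exactly where that assumption bites.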

I would see haskell very much as the plumbing side of things. The tools available for handling and reasoning about streams are streets ahead of anything I've seen elsewhere. With zeromq and protocol buffers (that's what we use in our stack) you could very nicely separate the plumbing of the data from its consumption. I'd love to see something like this as well!

How would you handle the position-sizing part of the algo if you're testing all days in parallel? Wouldn't the trade size depend on all of the previous days' PNL?

Hi fawce, I'm a big fan of Quantopian but didn't realize that zipline was a separate project. Will have a good scout around the project and see what you guys are up to :)

I too would be interested in collaborating if you switched to python. In my ideal world it would be C++ backend, python frontend

Did you look at Julia[0]?

It is a Lisp, semantically, with MATLAB-like syntax (including all the linear algebra sugar) and an LLVM back end.

The stated goal of the language is to give scientific programmers the convenience of high-level languages for prototyping, yet the speed of low-level ones. It has multiple dispatch based on a powerful type system, homoiconicity, and macros (optionally hygienic).

The package collection is growing steadily [1]. There's also a mailing list dedicated to doing statistics with Julia[2].

It looks like a good fit for your project.

[0] http://julialang.org

[1] http://docs.julialang.org/en/release-0.1/packages/packagelis...

[2] https://groups.google.com/forum/?fromgroups=#!forum/julia-st...

I've looked at how it compiles to LLVM, and it basically punts any optimization to the LLVM side, aside from the most basic tracing-JIT method monomorphization/specialization.

(as in, I spent part of an afternoon end of last week skimming the entire code base).

This is troubling, because when you don't design your language to make it "manageable" to do static analysis/optimization on, it becomes DIFFICULT to add that capability later. Julia is about as dynamic as JS or Scheme or Lua, plus it has all those runtime type-tag cases in code... but whereas JS has relatively unbounded manpower (relatively!) to make a fast JIT, whereas Lua is a small enough language that we have the amazingly impressive LuaJIT, and whereas Scheme is designed with very thoughtful semantics/specifications in mind, Julia lacks all of these.

More succinctly: Julia lacks a clear enough, thoughtful choice in static/dynamic semantics for the pre-LLVM side to have an easy optimization story given a small core team. Such optimization engineering will either take a long time to develop, or will require some commercial entity to sink serious capital into writing a good JIT/optimizer. LLVM is a good backend, but it can't leverage the semantics of the input language, just the semantics of the LLVM bitcode you provide. You really, really can't punt on optimization on the pre-LLVM side.

Also: Julia doesn't have a type system, it has a dynamic check system. (Show me the code for a static type check in the Julia code base as it exists on GitHub and I'll owe you a beer :) )

Let me repeat: Julia doesn't have clear static semantics / phase distinction, and doesn't really seem to do anything beyond method specialization before passing code on to LLVM. This means it can't get compelling performance compared with any other LLVM backed language.

LLVM is amazing, and i use ghc -fllvm exclusively, and while theres some really really awesome optimizations that LLVM does for haskell code (for example), a HUGE amount of high level code performance is really depending on the language (any language!) doing optimizations before passing things onto the LLVM.

The notion that Julia is not thoughtfully designed, and therefore we won't have the resources to make it run fast, is close to being the opposite of the main idea of the project. We don't want to go to extreme lengths to get performance, so we leverage design.

For example, one of the biggest challenges in dynamic type inference is getting the types of fields of mutable heap-allocated objects. We avoid this debacle by letting you declare field types, which also has benefits for code clarity and specifying memory layout. Everybody wins. But once you are writing types, you need fairly flexible types, to avoid being stuck with only Int and Any (everybody's favorites). We then follow the implications of this as far as we can.

As for "all those runtime type tag cases", the key is that they form a lattice, which feeds nicely into dataflow analysis. The lattice-theoretic properties of a language's universe of objects typically do not get enough attention, especially in dynamic languages, where it is actually most needed. Typically the lattices are either trivial (for example, scheme's fixed set of types), or highly uncooperative. You want to hit a sweet spot where you can compute greatest lower bounds that are actually somewhat interesting.
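For readers unfamiliar with the lattice point, here's a toy illustration in Python. This is not Julia's actual type graph, just the shape of the idea: subtype edges form a partial order, and dataflow analysis repeatedly computes bounds over it (shown here as joins, the dual of the meets mentioned above):

```python
# A toy type lattice: each type maps to its immediate supertype,
# with "Any" at the top. The interesting property is that bounds
# computed over it are often informative, not just "Any".

SUPERTYPE = {
    "Int":     "Integer",
    "Integer": "Real",
    "Float":   "Real",
    "Real":    "Number",
    "Number":  "Any",
    "String":  "Any",
    "Any":     None,
}

def ancestors(t):
    """The chain from a type up to the top of the lattice."""
    chain = []
    while t is not None:
        chain.append(t)
        t = SUPERTYPE[t]
    return chain

def join(a, b):
    """Least upper bound: the most specific common supertype."""
    up = set(ancestors(a))
    for t in ancestors(b):
        if t in up:
            return t
    return "Any"

specific = join("Int", "Float")    # -> "Real", an informative bound
vague = join("Int", "String")      # -> "Any", nothing useful known
```

In an uncooperative lattice nearly every bound collapses to the top element and inference learns nothing; a well-shaped one keeps answers like "Real" around, which is what makes the dataflow pass worth running.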

I didn't look at the implementation, and would be out of my depth if I did. I'll relay your concerns to the authors (I share some of them, actually), but meanwhile, here are my thoughts:

The fact is that the performance of Julia is already quite good. Julia is semantically much closer to C than Haskell is, and it needs fewer optimisations to get reasonable speed.

The necessity of static analysis for performance is a myth that's been debunked for a while now. From the language semantics, I think that the runtime model of Julia must be about as complex as Lua's, including the LuaJIT FFI (swap metatables with type-based multiple dispatch).

As you said, though, whereas LuaJIT has its own tailored bytecode, Julia compiles to LLVM IR, and potentially useful information is lost in the translation process. LLVM-Lua is reportedly 2-3x slower than LuaJIT. I suppose that type specialized traces and immutable types help somewhat.

An area that clearly needs improvement is vectorised code. Currently, a new vector/matrix is allocated for each intermediate step. Bad. This is being worked on.

> also: Julia doesn't have a type system, it has a dynamic check system. (Show me the code for static type check in the julia code base as it exists on github and I'll owe you a beer :) )

You're conflating a type system with static type checking. Julia is dynamically typed, but its programming model (the UI, of sorts) is built around the type system and multiple dispatch; it is one of the main features of the language, actually.

Thanks for checking Julia out. I fear there is a fair amount of misinformation in your comment, however, so it seems that a few things were lost in your reading of Julia's code (some of which is admittedly quite tricky). I hope you don't mind if I address some of it.

The first point is that Julia already has excellent performance on a par with most compiled languages, including, e.g. Haskell, whether they are using LLVM or not. Straight-forward Julia code is typically within a factor of two of C. That's shown in the microbenchmarks on Julia's web site [http://julialang.org/], but, of course, that's not entirely convincing because, you know, they're microbenchmarks and we wrote them. However, similar performance is consistently found in real-world applications by other people. You don't have to take my word for it – here's what Tim Holy [http://holylab.wustl.edu] has to say: https://groups.google.com/d/msg/julia-users/eQTYBxTnVEs/LDAv.... Iain Dunning and Miles Lubin also found it to be well within a factor of 2 of highly optimized C++ code when implementing realistic linear programming codes in pure Julia: https://github.com/JuliaLang/julia-tutorial/blob/master/Nume.... The benchmarks appear on page 7 of their presentation.

This statement about Julia's high-level optimizations is entirely wrong:

> i've looked at how it compiles to llvm, and it basically punts any optimization to the llvm side, aside from the most basic tracing jit method monomorphization/specialization.

Julia does no tracing at all, so it's definitely not a tracing JIT. A relatively small (but growing) and very crucial amount of high-level optimization is performed on the Julia AST before generating LLVM code. In particular, a dynamic dataflow-based type inference pass is done on the Julia AST. Since Julia is homoiconic, this type inference pass can be implemented in Julia itself, which may be why you missed it: https://github.com/JuliaLang/julia/blob/master/base/inferenc.... Don't be fooled by the brevity of the code – Jeff's dynamic type inference algorithm is one of the most sophisticated to be found anywhere; see http://arxiv.org/pdf/1209.5145v1.pdf for a more extensive explanation. It's also highly effective: 60% of the time it determines the exact type of an expression, and most of the expressions which cannot be concretely typed are not performance critical [see section 5.2 of the same paper]. You are correct that we leave machine code generation to LLVM – after all, that's what it's for – but without all that type information there's no way we could coax LLVM into generating good machine code. Other very important optimization passes done on the Julia AST include aggressive method inlining and elimination of tuple allocation.

> more succintly, Julia lacks a clear enough thoughtful choice in static/dynamic semantics for the pre LLVM side to have an easy optimization story given a small sized core team, such optimization engineering will either take a long time to develop, or will require some commercial entity to sink serious capital into writing a good JIT / optimizer.

There is a very clear and thoughtful choice in static vs. dynamic semantics in Julia: all semantics are dynamic; there are no static semantics at all. If you think about your code executing fully dynamically, that is exactly how it will behave. Of course, to get good performance, the system figures out when your code is actually quite static, but you never have to think about the distinction. And again, Julia already has excellent performance, and we have accomplished that with an admittedly tiny and relatively poorly funded team. (All the money in the world won't buy you another Jeff Bezanson.)

> also: Julia doesn't have a type system, it has a dynamic check system. (Show me the code for static type check in the julia code base as it exists on github and I'll owe you a beer :) )

The academic programming language community has gradually narrowed their notion of what a type is over the past decades to the point where a type system can only be something used for static type checking. Meanwhile, the real world has gone full throttle in the other direction: fully dynamic languages have become hugely popular. So yes, if you're a programming language theorist, you may want to insist that Julia has a "tag system" rather than a "type system" and other type theorists will nod their heads in agreement. However, the rest of the world calls the classes of representations for values in dynamic language like Python "types" and understands that a system for talking about those types – checked or not – qualifies as a type system. So, while you are correct that Julia doesn't do any static type checking, it is still understood to have what most people call a "type system".

[There's actually an important point of programming language philosophy here: one of the premises of Julia is that static type checking isn't actually the main benefit that's brought to the table by a type system. Rather, we leverage it for greater expressiveness and performance, leaving type checking on the table – for now. This emphasis doesn't mean that we can't add some type checking later – since we can infer exact types 60% of the time, we can check that those situations don't lead to errors. We can also provide feedback to the programmer about places where they could improve the "staticness" of their code and get better performance or better "checkability". This lets the programmer use a dynamic style for prototyping and gradually make their program more and more static as it needs to be faster and/or more reliable.]

> Let me repeat: Julia doesn't have clear static semantics / phase distinction, and doesn't really seem to do anything beyond method specialization before passing code on to LLVM. This means it can't get compelling performance compared with any other LLVM backed language.

I'll repeat myself a bit too. Julia has a very clear static semantics – there are none. The run-time does quite a bit of analysis and optimization after method specialization (aggressive run-time method specialization is incredibly important, however, so one shouldn't discount it). And, of course, Julia already has compelling performance compared with other languages, both static and dynamic, in benchmarks and real-world applications.

Extremely helpful - this really clarifies the differentiation in the Julia approach, and I'm excited your team is taking this direction. There are a lot of people cheering Julia on, even if we're wimping out, remaining on the sidelines.

It would help if this explanation was a bit more prominent.

You make interesting points. I'll have to read the links and think about it.

I'm also in New York if you want to chat about it over a beer some time. I doubt I can convert a hardcore Haskell programmer on my "scruffy" unchecked, dynamic point of view, but we might have a good conversation about it. We've occasionally quipped that Julia drags Matlab programmers halfway to Haskell, so maybe there's some common ground.

Sure, that'd be fun!

I think we've had 1-2 interactions where we've not quite gotten along, but I might have just been misinterpreting (or mixing up julia devs)

Yes. Julia looks like a quality language but suffers the usual new-kid-on-the-block hassles: very few packages, so you have to spend too long rolling your own.

Are there specific packages that you would currently need but aren't there (yet), or is it a more general concern about the relative package scarcity and unforeseen changes to / future needs of your code?

It's not specific, it's just my recent experience. Hadley released his bigvis package a few weeks ago, and it contains the beginnings of a weighted average routine I sure can use.

mmap is a package that let me create a fast db from scratch in a day. ggplot, lattice and all the charting routines are a joy to work with.

I'm interested in random forests today, and I bet there's an R package that links to the standard libraries, whatever they are. This is also probably true for Python, but in Julia you'd have to build the API yourself.

Are you planning to run R in production, or is it purely for research that then gets translated to haskell?

Replying here as I can't add another reply below.

The way we do our monitoring/UI is with zeromq/protocol buffers as an external surface to the trading system (basically a PUB socket) and a very thin bridging and translation layer to WebSockets and JSON in python. That way you get to use d3 or anything else on the front end. We use backbone, knockback and d3 with coffeescript.

That's very similar to what I/we use to build our realtime network monitoring tools, swapping out Python for PHP (regrettably). It's validating to see other shops in different industries zeroing in on similar stacks.

Yes, realtime over WebSocket to a data-binding-style library like Knockout (I'd probably choose Angular if I was starting now) saves you a ton of work.

Are you using ReactPHP for the ZMQ/webSocket parts or something else?

Sounds great, but way beyond either my paygrade or budget.

I see the research as production. There's an autonomous system processing the event stream and deciding on trades that's hands free. I see that as a haskell codebase with C speedups for critical stuff, with no R anywhere to be seen.

But then I see a wrapper piece that reports to humans about what's going on within the system. I see R as a great visualization tool for that.

I've had good success so far with using R as a fast column database, solving big data hassles, but there may be better solutions in that space.

I've had a quick read through the .org file now and it appears that you're using R for things like moving averages over the stream of prices. Have you considered using one of Haskell's FRP (or pipes/conduit) libraries for this side of things?
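For concreteness, the streaming style in question looks something like this generator sketch (in Haskell this would be a pipes/conduit stage; toy data for illustration):

```python
from collections import deque

# A moving average computed one tick at a time, rather than over a
# whole vector: the windowed sum is updated incrementally as each
# price arrives, so the stage works on an unbounded stream.

def moving_average(prices, window):
    buf, total = deque(), 0.0
    for p in prices:
        buf.append(p)
        total += p
        if len(buf) > window:
            total -= buf.popleft()   # drop the price leaving the window
        yield total / len(buf)       # partial windows average what's seen

ticks = [10.0, 11.0, 12.0, 13.0]
averages = list(moving_average(ticks, window=2))
```

Composed generator stages like this are the Python analogue of chaining stream transformers, and they translate fairly directly into pipes/conduit pipelines.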

Definitely. I intend to move all of that code into Haskell once I get a handle on whether the algo design is on the right track.

Hi boothead, just curious what you do when you run into the inevitable performance issues that can't be ameliorated with libraries like NumPy?

Is Cython or PyPy useful, or do you use Python more for prototyping, and rewrite some portions in C?

Python is definitely preferred on the research rather than the production side generally. Some organisations do use it on the infrastructure side too, but these guys are algorithmic traders, and not HFTs so latency isn't that big a deal.

WRT the problem of deploying prototyping/research code, there are the following unsolved problems that I'm aware of:

* Going from matrix operations over the whole timeseries (taking care to avoid problems with your algo looking ahead) for speed in a research setting, to deploying to an environment that streams updates to the timeseries one at a time. I think this is an area where Haskell has the potential to excel, given its strong guarantees on structure.

* Concurrency. The options in Python all suck to some degree, especially if you have to interact with C libraries or extensions. I don't think it will ever make sense to build your real-time market data system in Python. Again, Haskell has an advantage here.
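The first point above can be made concrete: Welford's online algorithm computes mean and variance one observation at a time, using only data seen so far, and agrees with the batch result, which is exactly the property you want when moving research code into a streaming environment (toy data below):

```python
import statistics

# A statistic written as a whole-series ("matrix") computation in
# research must match a one-update-at-a-time version in production.
# Welford's online mean/variance is the classic example: each update
# uses only past data, so look-ahead bias is impossible by construction.

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Sample variance, matching the batch (n-1) convention."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

prices = [100.0, 101.0, 99.5, 100.5, 102.0]
rs = RunningStats()
for p in prices:
    rs.update(p)

# The streaming result agrees with the batch computation:
batch_var = statistics.variance(prices)
```

Proving that the streaming and batch forms of an arbitrary research computation coincide is the hard, unsolved part; simple statistics like this are the easy cases.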

However the python ecosystem seems to be almost perfect for researchers:

* Excellent and flexible data slurping/munging/transforming.

* numpy, scipy, pandas, theano, scikits... 'nuff said

* ipython

* Cross platform


> Going from matrix operations over the whole timeseries ... to deploying to an environment that streams updates to the timeseries one at a time. I think that this is an area that haskell has the potential to excel at, given it's strong guarantees on structure.

exactly my thought. Algo guys get stuck in matrix land because that's where their tools take them. Whereas this came out in R last week: http://cran.r-project.org/web/packages/stream/index.html

python is a better toolset than R maybe, but the R problem domain seems broader in the last few months anyway.

My interest is in monitoring thousands of algorithms in real time directly within the messaging environment, and before the data hits a database. That type of concurrency is where haskell can muscle up and do the job.

Some of the work that Continuum has done on blaze [1] looks to be tackling the problem of streaming in python too. In fact some of the ideas in this library come directly from haskell, and one of the main developers is Stephen Diehl, who posted the very popular "what I wish I knew about haskell" slides recently.

Looks like a match made in heaven :-)

[1] http://blaze.pydata.org/

Stephen here, yes boothead is right. Using a combination of Haskell and Python you can make a really powerful trading system. Python has a lot of user-facing algorithm tools and Haskell has the robustness and parallelism for the backend that Python doesn't.

If you're interested in advice on how to bridge the two worlds let me know, there's a lot of upcoming technology ( LLVM, Blaze, pipes, zeromq, cloud-haskell ) that could be very useful.

It does feel like I'd be swimming against the tide heading down the R path. Thanks for the offer :)

I haven't worked in anything as high performance as this (though I've been doing lots of number crunching in Python recently) but there are a couple of great libraries that you can use before you jump into Cython. Check out Numba [1] and numexpr [2]

[1] https://github.com/numba/numba [2] https://code.google.com/p/numexpr/

Are you aware of zipline ports (or similar libraries) in Ruby or ObjectiveC (or plain C)?

Are you trying to get hired by the financial industry, or is there some other reason for doing this?

As you may well be aware, HFT is a scourge on the world's economy, and it's a game only the biggest and best-connected players benefit from.

No, I'm already in the finance industry. There are two main reasons I'm doing this:

* I think the finance industry is very closed when it comes to intellectual property development, and an open source approach can be seriously competitive. An open approach may well be the future when it comes to being 'connected'.

* HFT is an interesting multi-disciplinary problem, and the sheer breadth of expertise required - modern chip design, low-latency software/hardware interaction, lock-free concurrency, fault-tolerant system design, adaptive learning algorithms, k-means clustering - means I'm learning heaps every day.

I just don't agree that HFT is a scourge. It's an ecological shift (neither good nor evil) and longer-horizon investors need to evolve.

Trading (as in HFT) isn't the same as investing. While the mechanics can sometimes be similar, a trader is different from an investor.

What HFT does is segment the market. It's probably best described as a form of arbitrage. Arbitrage is necessary, but in a good market hopefully it is pushed to some form of equilibrium.

HFT has an impact on the trading market. However, that doesn't automatically extend to investment.

HFT is like a million tiny trolls underneath the bridge trying to extract their tax.

... replacing one big troll on the bridge that used to extract more than the million tiny ones' total...

The reason spreads have come down is mostly due to better information on the part of the traders. Electronic markets made this possible.

"HFT is an interesting multi-disciplinary problem"

This makes this a very worthwhile project, whether or not anything comes of it (I considered writing technical analysis trading software for the exact same reason).

Some documentation about the patterns and techniques you used would likely benefit the community a tiny bit more. Not everyone understands Haskell (or, heaven forbid, has the time to peruse large codebases).

Best of luck to you!

There are many other interesting multi-disciplinary problems that are not as contentious as HFT and probably more publically acceptable and worthwhile. But if HFT is the domain you care about, go for it.

No, long-horizon investors are normally overall not interested in HFT because they won't 'suffer' from it.

If you're already in the finance industry, how do you have enough free time to work on this?

> an open source approach can be seriously competitive

Do you really think any Joe Schmoe off the street could just grab your open source HFT and start making money with it? If not, how does your project benefit anyone?

Are you working in the industry, but not in HFT? Maybe this project is just practice for getting a job in HFT? I imagine that's where programmers get the fattest paychecks in the world. Only a fraction of what the sociopaths running the show get, of course, but good money nevertheless.

> I just don't agree that HFT is a scourge.

Good thing you're not at all biased.

Good points. To be more precise, I think that an open source approach to system research and design can be hyper-competitive versus closed-door secret-squirrel development run by a committee looking for short-term wins. This project benefits me firstly because I get feedback on my initial scratchings and maybe even collaboration on areas outside my comfort zone. And it could well enable me and others to avoid having to work for the sociopaths.

I am biased in thinking that monitoring the market and trading at low latency isn't fundamentally a bad thing, and shouldn't be taxed because others are too lazy to do the same thing. I certainly think that HFT has been used by evil people to front-run unknowing third parties, often in collaboration with middle men and women who turn a blind eye to the morality of their business models.

Others are too lazy? It's my understanding that the average Joe has no access to the hardware and low-latency network connections to be able to do this on his own.

There is a rather large economic rent attached to HFT at the moment. If my project or others like it can eat into that rent a touch, it might not stay that way.

Exactly, if you take the position that gains from HFT are basically "stolen" profits from otherwise more productive market participants, then any that can be "stolen back" is a win.

There have always been "market-makers"; why do you care if they are algorithmic or human?

> It's my understanding that the average Joe has no access to the hardware and low latency network connections to be able to do this on his own.

Yes, but for US equities this is purely economic: _anyone_ can buy colocated rackspace, direct market data feeds, and direct connectivity. The average Joe does not want to do this any more than he wants to build his own tennis-shoe factory.

Sorry, you're criticising the mindset of "short-term wins" and you're in HFT? What do you think HFT is? Wake up and smell the coffee.

> Good thing you're not at all biased.

> As you may well be aware, HFT is a scourge on the world's economy, and it's a game only the biggest and best-connected players benefit from.

Good thing you're not stating opinion as fact.

I don't get your sarcasm at the end of your post. He's stating up front in plain terms what his position is; he's not trying to hide his bias.

HFT isn't necessarily a scourge - if anything it discourages people from short term investing, which is a Good Thing.

Long term investing isn't affected by HFT, except in providing more liquidity at the point you want to exit any position. If you're planning on holding for 5+ years you're competing against relatively few people and can do good research. Any shorter timeframe and you're competing against millions, against computers and against unknowables that will distort the price in the short term.

Long-term investors trade every day. You wake up and the market has gone down a few percent; you're going to rebalance at least. If you decide to buy 30-year treasuries you're not going to hold them for thirty years - you're going to sell 29.5-year bonds and buy 30-year bonds in a few months' time.

And managers turn over long-term investor portfolios quickly. The average holding period for an SP500 company shareholder is 100 days.

And every time a trade happens you run the risk of getting clipped by the faster guys who see you coming.

But long term investors don't care about the "clip" that HFTs take, they care about the 80% rise in price over the next 5 years. Rebalancing portfolios again isn't something where getting a few basis points off the HFT best execution price is an issue. I work for a proper long term investor, and we trade over a period of days for the big trades. HFTs aren't even an issue apart from their occasional spectacular blowups that cause "could this happen to us?" memos from upstairs. Long term investors don't need to evolve because of HFTs; they're more than happy making money not involving computers or trying to compete with them.

> And every time a trade happens you run the risk of getting clipped by the faster guys who see you coming.

I think it is natural, but fallacious, to apply line-of-sight properties to trading. If you plan to trade, then there are two ways that another participant can "see you coming": 1. If you don't have direct market access, your order gets routed through a broker. The broker sees your order before it hits the market, and if he jumps in line ahead of you then that is front-running, and a Bad Thing. Your broker can get in a lot of trouble for this sort of thing. 2. If you are trying to move a large position by sending multiple orders to the market (one after another), then all market participants have the chance to react to the first order. It's really a game to try to move a lot of inventory at once, without tipping your hand to anyone else in the room. Thems the breaks. Nobody else gets to see anyone's order before the matching engine has already processed it, so there's no way to jump "ahead" of it.

OTOH, maybe your long-term investor is trying to time the market: wait for a signal intraday, and pick that moment to send an order. In that case, if it is a good intraday signal then it is likely that someone else will compete. It is unlikely for a long-term trader to have spent as much on infrastructure as a HF trader, so the juicy signals will result in missed executions that _look_ like front-running.

Yes, very loose terminology. I was thinking of a combination of two events:

1. VWAP based trading of large positions, which creates asymmetric momentum effects in volume and price (and which is then somewhat forecastable)

2. Not then recognizing that you are forecastable (as a result of playing the VWAP game). The extra information that someone who looks at intraday price relationships has over someone who doesn't. If you wade into the middle of a market that is short-run forecastable (eg it's trending downwards to a new level and the market maker/HFT guys are battling their battles), and you don't check whether it's short-run forecastable, then you're probably the patsy at the poker table.
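For anyone unfamiliar with the term: VWAP execution slices a big parent order so its fills track the volume-weighted average price. A rough sketch of the running calculation (the trade data is made up) shows why a VWAP-tracking algo leaks information - its child orders have to follow this curve as volume arrives:

```python
def running_vwap(trades):
    """trades: iterable of (price, volume) pairs.
    Yields the volume-weighted average price after each trade."""
    notional = 0.0
    volume = 0.0
    for price, vol in trades:
        notional += price * vol
        volume += vol
        yield notional / volume

trades = [(100.0, 200), (100.5, 100), (99.5, 300)]
vwaps = list(running_vwap(trades))
# vwaps[0] == 100.0; later values drift with each (price, volume) print
```

Anyone watching the tape intraday can fit this same curve, which is the asymmetric forecastability being described above.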

It's weird how this "providing liquidity" distraction never seems to go out of fashion. HFT provides liquidity if you happen to be another HFT algorithm. Otherwise, not so much.

Why would short term investing be bad, and how long do you think HFT algorithms hold their "investments"?

I hope you're not just trying to blow smoke up my ass here.

It isn't just about liquidity, though HFT does provide that. It also reduces the bid/ask spread, i.e. improves price discovery. The HFT trader may make a tiny profit when it processes a trade from me, but I've benefited because I can buy/sell at a price close to the quoted exchange price, which didn't use to be the case. In terms of liquidity those trades are available to all; HFTs love buying from non-HFT traders because they can make a small profit from their less accurate pricing. You benefit from the lower spreads and that liquidity. Before HFT you'd have paid a greater cost due to the spread on the price.

Short term investing is "bad" because you're not investing in the future success of the company; you're trading on the basis of what you think the market is going to do. To the extent to which that is ever knowable, it is unlikely you have the skill, experience and data to be able to beat the large number of professionals doing it.

Investing in company stock is much more about investing with a global pool of liquid capital seeking risky asset exposures these days. When global capital changes its mind (in whatever direction) those watching the market closely will be the first to know and the first to profit from the market signal. You can go broke investing in companies that look like they have a bright future.

It sounds like you work for the financial industry too.

> The HFT trader

You say it like it's a person making trades.

> I've benefited because I can buy/sell at a price close to the quoted exchange price, which didn't use to be the case.

What's a "quoted exchange price" if it's not the price you actually pay?

> Short term investing is "bad" because you're not investing in the future success of the company you're trading on the basis of what you think the market is going to do.

Isn't this what HFT is all about?

Trading on insider information is bad, of course. But if you're on a level playing field, I don't see anything wrong with profiting from something you correctly predicted short-term.

> What's a "quoted exchange price" if it's not the price you actually pay?

The actual price you have to pay is based on how much you want to buy and how much other people are willing to sell at what price.

People who want to buy give a "bid" price and people who want to sell give an "ask" price. Whenever there is an ask price that is lower than a bid price, a sale takes place. This results in the lowest ask price always being higher than the highest bid price. The quoted exchange price will be somewhere in between the two prices.

If you want to sell right away, the most you can get is the highest bid price on the books. If you want to buy right away, the cheapest you can get is the lowest ask price on the books. This means that you have to pay more than the quoted exchange price to buy and receive less than the quoted exchange price to sell.
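A toy book makes the arithmetic concrete (all prices are hypothetical):

```python
# Resting orders on a toy book: the highest bid sits below the
# lowest ask, so nothing crosses until someone pays the spread.
bids = [100.10, 100.05, 100.00]  # buyers, best first
asks = [100.20, 100.25, 100.30]  # sellers, best first

best_bid, best_ask = bids[0], asks[0]
mid = (best_bid + best_ask) / 2   # one common "quoted" price
spread = best_ask - best_bid

buy_now = best_ask    # a marketable buy pays the ask (above mid)
sell_now = best_bid   # a marketable sell receives the bid (below mid)
assert sell_now < mid < buy_now   # the cost of immediacy
```

Tightening `spread` is precisely the benefit HFT market-making is being credited with in this subthread.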

I do - but for a 'proper' long term investor that cares about the price in 20 years' time.

The quoted exchange price is an average of the buy/sell offers available; there is no guarantee that you will pay that amount if you try to trade. Especially if you are shifting large blocks of stock.

There is nothing wrong with "short term predicting" but it isn't investing. It is often a very easy way to lose money, "bad" doesn't just mean morally so. It comes down to time horizons, how much you want to make, and how many people are also trying to predict that event. You can correctly predict that event and still not make money as it is already priced in.

> What's a "quoted exchange price" if it's not the price you actually pay?

The mean of the bid and the ask, generally. For a daily close price it can be a fifteen-or-so minute average thereof.

>HFT is a scourge on the world's economy

I'd point out that a lot of the things that are going on in HFT, are rehashes of old trading scams, and either are or would be illegal if there was any adult supervision. I was tempted to say something about the SEC being left behind the technology, and doesn't understand it. But for that to be true, and have this crap go on for so long, they either need to be complete fools, or they benefit from the status quo.

>and it's a game only the biggest and best-connected players benefit from.

That's definitely true in the case of latency arbitrage. But here again, it's the exchanges themselves that have decided that some players get an advantage over others. As an example, latency arbitrage and a whole host of other problems could be sorted out by putting traders on exchange-hosted virtual machines. http://www.dailyfinance.com/2010/06/05/rigged-market-latency...

If not virtual machines, there are other ways to fix these problems, but first you need to get the players to acknowledge that there is a problem.


Nanex has done a lot of good analysis of HFT, some of which is published here:


Nanex is wonderful to read, but IMO they frequently misinterpret the data. Off the top of my head:

- They only use consolidated feeds for US equities, never direct market-data feeds. The consolidated feed necessarily contains less information than direct feeds (to satisfy more stringent bandwidth requirements), which masks some "interesting" effects of how the exchanges publish their data.

- They disregard that the CME feed publishes a fixed depth-of-book, and whenever they look at total liquidity in the book it can appear to flicker when deep levels fall "out" of the back of the book, even if liquidity is actually improving with the presence of a new inside level.

- They make a big deal about wholesaler matching only occurring when the consolidated book is not locked. Their rationale is that subpenny prices are always wholesalers, and (erroneously) therefore a lack of subpenny-priced trades must mean a lack of wholesale matching.

These mistakes sound believable, but they do not hold up to any scrutiny. Use their site to find interesting events, but be very careful about taking their conclusions at face value.


That's why I thought the DTN iqfeed was a suitable choice - no consolidation at all. Would love to know what the "interesting" effects are (beyond the fixed-book CME effect) - will be looking a bit harder at the stream to find them!

> As you may well be aware, HFT is a scourge on the world's economy, and it's a game only the biggest and best-connected players benefit from.

This is the line peddled by the popular press but it isn't true. HFT works to reduce spreads, which benefits both buyers and sellers.

Latency arbing is a scourge, and makes limit orders practically useless. Quote stuffing would be illegal if trading were still done on little slips of paper. Imagine dumping 10,000 slips of paper on the trading desk, and then shouting "just kidding!" In the words of Lawrence from Office Space, "You'd get your ass kicked". Or even worse, 10,000 empty bids. That's why people hate HFT.

I'm not trying to be argumentative, but I am curious why you find latency arb so problematic, and what you think its impact is on limit orders.

I for one think latency arb is one of the bigger net wins for hft. As a market participant, each venue I have to maintain a presence at is a cost to me. I'm willing to pay the latency arb shops their cut to provide me price consistency because for my models it is much cheaper to do so than to continually reevaluate and redeploy to every possible venue. It frees me to shop for venues that provide the best features and fees.

As for quote stuffing, you are absolutely right it is awful. That's why almost every venue out there has taken or is taking steps to curtail it. They did so because their customers agree with you.

> I'm not trying to be argumentative, but I am curious why you find latency arb so problematic

That is a loaded term. "Latency arb" as you described it is HFT keeping all protected exchanges synchronized, and it is a good thing. It means that everyone else can ignore the 13 exchanges, and send their orders to the market with the most competitive pricing for connectivity. "Latency arb" as described in most literature critical of HFT is the specific practice of submitting and canceling non-bona-fide quotes to an exchange with the intent of slowing down the matching engine. If you can slow down the matching engine that most other participants are using, you can effectively delay the public response to your actions on the other 12 exchanges. A trading strategy that operates on the basis of a DoS attack on one exchange is definitely problematic.
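In the benign sense of the term, latency arb is little more than a loop that watches the same instrument on every venue and trades when quotes diverge by more than costs - which is exactly what pulls the venues back into line. A toy sketch (the venues, quotes and fee are all invented):

```python
def find_arb(quotes, fee=0.01):
    """quotes: {venue: (bid, ask)}. Return (buy_venue, sell_venue) if
    buying on one venue and selling on another clears round-trip fees,
    else None. Acting on this is what keeps venues synchronized."""
    cheap = min(quotes, key=lambda v: quotes[v][1])  # lowest ask
    rich = max(quotes, key=lambda v: quotes[v][0])   # highest bid
    edge = quotes[rich][0] - quotes[cheap][1]
    if cheap != rich and edge > 2 * fee:
        return cheap, rich
    return None

# Venue B's bid is above venue A's ask by more than fees: buy A, sell B.
assert find_arb({"A": (100.00, 100.05), "B": (100.10, 100.15)}) == ("A", "B")
# Quotes in line: no trade.
assert find_arb({"A": (100.00, 100.05), "B": (100.01, 100.06)}) is None
```

The abusive variant described above is different in kind: it manufactures the divergence by degrading one venue's matching engine rather than reacting to a real one.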

What you are describing is quote stuffing. Nearly every participant in HFT/algo trading agrees that it is a problematic practice. Nearly every venue has either enacted or is enacting policies and procedures to either prevent it or severely penalize it, because that is what all of their customers want.

> What you are describing is quote stuffing.

Yes. Sometimes it is referred to as latency arbitrage. Hence why people _nominally_ talking about that subject may, in fact, be talking about different things.

> Nearly every venue has either enacted or is enacting policies and procedures to either prevent it or severely penalize it, because that is what all of their customers want.

Regarding penalties to prevent or penalize it, I disagree. NASDAQ's policy uses a ratio weighted by distance from the top-of-book. There is no penalty for excessive order submit-cancel loops at the top-of-book. Their matching engine also operates in a fashion which specifically encourages cancel/resubmit loops at the top-of-book, in that they accept and subsequently display limit orders at a different price than submitted. If they were serious about preventing quote-stuffing, they could fix this simply by rejecting those orders. Presumably they either don't care (because their system doesn't get bogged down), or there is pressure from some big customers to keep the status quo here.

A) Top-of-book quote stuffing is a pretty dangerous game to get into. Can't confirm that no one is doing it, but I'd be surprised if it were a big problem. B) NASDAQ is one of the better technology platforms and does not experience as much lag as the others, so it could be as you say, they just don't care. C) The charges away from the inside market are only one of several ways NASDAQ discourages excessive quoting. They also have tiered rebate pricing based on quote/fill ratios, and PSX has an order rest time requirement.

So I guess if you were a shop that didn't mind playing with fire quoting top of book, and you never actually wanted to market make on NASDAQ, you could still quote stuff them.

I stand by my statement that venues continue to enact penalties to discourage quote stuffing, and as such it is not nearly the problem people make it out to be.

As a market participant, if the NASDAQ is not providing you with an execution platform to your liking (whether due to laggy matching or anything else) you are free to choose another venue and thanks to latency arb shops you are probably not going to pay much of a price premium to do it.

The biggest problem in all discussions of HFT/algo trading, especially on internet forums and in exposé reporting, is people using incorrect terms. We don't let people get away with it in other technical settings because it leads to unnecessary strife. I think the same thing applies to electronic trading.

Quote stuffing is differentiated from plain old vanilla latency arbitrage by the fact that the purpose of quote stuffing is to cause artificial latency to then take advantage of. IMO one can still make defensible arguments in favor of latency arbitrage when firms have invested in faster technology than other market participants. There is certainly no defensible argument for causing the latency and then taking advantage of the problem you caused to extract a profit.

'Quote stuffing' is there because of absolutely ridiculous SEC rules. If you'd put in fractional prices and remove NBBO rules, there wouldn't be any 'quote stuffing'. And latency arb - what's wrong with latency arb? How else can you move information from one exchange to another fairly?

Think of HFTs as part of the financial network infrastructure, where exchanges play the role of nodes and HFT companies the role of links/queues/buffers.

No worries, I'm specifically talking about the folks that are constantly testing for buys with tiny (and fake) orders and incrementing the ask until they find the limit, then buying all they can, holding for only milliseconds and reselling at the higher price. Queue jumping tricks like that which really add no value.

I'm curious about the "best trading platform" claim - what is going to make it better than any other? Especially as the real High Frequency Traders are busy spending fortunes on placing their hardware as close to the exchange as physically possible.

Speed is only one issue with autonomous algorithm design. Yes, the speed thing grabs the headlines but the boiler-plate objective is to front-run the slower players. There's a wealth of opportunity in processing the event stream in a more robust way than others and faster too. Think semi-HFT, semi-autonomous.

Most trading platforms are primarily loss leaders for the 'professional' version and otherwise attached to a non-open source business plan.

But doesn't that mean you're at the mercy of the professional HFTs who can "front run" anyone running on your platform?

You've hit a pet peeve of mine so sorry about the tirade. People use the term "front run" incorrectly and it is important to point this out. Front running is a very specific activity and is illegal. If there is evidence of front running people need to be prosecuted.

That said, I've worked on several systems and never seen any of them that would allow a faster market participant to see your order flow without your permission. This is what front running is.

People who don't understand the term seem to think seeing market data before other participants is front running; it's not. There have always been and will always be some traders that see (and can react to) market data faster than everyone else. In the era of electronic trading this information latency cost is going down, not up.

FYI: This isn't an HFT bot but is a decent, recent Python + ncurses trading client framework for Bitcoin on MtGox. http://prof7bit.github.io/goxtool/

Even if you aren't interested in Bitcoin, it might be useful as a real-ish and cheap place to test with low barriers to entry. I've found that backtesting on historical data is usually not realistic enough since most people fail to consider liquidity.
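On the liquidity point: one cheap improvement over "fill at mid" backtests is to charge each simulated fill half the spread plus an impact penalty that grows with order size relative to displayed depth. The model below is purely illustrative (a linear impact term with a made-up coefficient), not a calibrated one:

```python
def fill_price(side, mid, spread, qty, depth, impact=0.1):
    """Pessimistic fill model for a backtest: cross half the spread,
    plus a linear penalty for the fraction of displayed depth consumed."""
    penalty = spread / 2 + impact * spread * (qty / depth)
    return mid + penalty if side == "buy" else mid - penalty

# Buying 500 shares against 1000 displayed, 0.10 spread around 100.00:
px = fill_price("buy", mid=100.0, spread=0.10, qty=500, depth=1000)
# px comes out above mid - worse than a naive backtest would assume.
```

Even a crude penalty like this kills a lot of strategies that look profitable against raw historical prints.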

I'm interested because I like to trade, would like to learn more, and I have had some small success. But I have trouble sticking to my plan. I let emotion get the best of me and wind up losing my gains.

Isn't MtGox over as a bitcoin trading platform? I agree more generally that bitcoin could be a useful market to test out.

And your story fits exactly with a hands-off automated approach to trading. We won't ever be charting a price series in the production environment because computers don't get charts.

>Isn't MtGox over as a bitcoin trading platform?

I wish. I keep waiting for some more interesting news from Coinlab [1][2], but so far, Mtgox are still the largest exchange by volume. Here is a chart showing volume by (selected) exchanges. http://bitcoincharts.com/markets/currency/USD.html

The recent events seem to only have got them more customers (though it remains to be seen how many of them will stick). In any case, some of the other exchanges have APIs that are similar to mtgox.

[1] https://mtgox.com/press_release_20130228.html

[2] http://www.forbes.com/sites/jonmatonis/2012/04/24/coinlab-at...

Isn't the barrier to HFT the fact that you need enough capital so that your profits cover the cost of co-located servers and FPGAs in the exchange datacenter, without which you have a latency handicap? (in addition to, of course, coming up with good algorithms)

Yes, if the only edge you have is speed. But a good algorithm may well be the next battleground as the benefits of ultra-low latency reduce.

Cool! Very interesting.

I think given the languages you have selected you are coming more from a quant background? These languages are great for heuristics and analysis but you would really want all 'static' components such as connectivity built in assembly/C/C++. For 'algo' components I like Java, as you can still pull microsecond-order latencies when crunching numbers, but more importantly it gives you a huge time-to-market advantage over C/C++ for almost the same speed. I'm also not clear on whether you are connecting directly to the market for market data or using aggregation (like Reuters). The latter would be too slow. I'm also not clear on what middleware you are using, which is probably the biggest decision you will have to make. Most either use in-house tech or 29West LBM (everyone still calls it that even though they were bought out).

An overlooked part of HFT in my experience is OS optimisation, and even things like TCP bypass (for some components), which can lead to huge speed advantages and end-to-end latency reductions. I agree with those about FPGAs.. in my experience they really don't come into the equation except for components that rarely change.

Anyway a few guys including Martin Thompson have felt similarly to you and initiated the lodestone project (http://lodestonefoundation.wordpress.com/). If you are keen to learn more about their architectures for low latency, distribution and componentisation then I feel that you could join forces and contribute to that initiative. FYI: Martin built most of the technology behind the disruptor (http://lmax-exchange.github.io/disruptor/).

Great to see interest in making this knowledge more widely available :)

thanks mangrish,

quant. How could you tell? I'm connecting with an aggregator because the direct market feeds are monopolistic price gougers. It's an easy switch if we make it up that curve.

I'm not sure if I even understand what middleware is (I'm a quant), but I think the answer is the disruptor!

From the languages.. I've worked with lots of quants and they all rave about R!

Yeah..the aggregator is where you get done in both cost and latency. The prop shops and hedge funds pay through the nose for that stuff so unless you come packing a little capital, true HFT is an issue.

On the middleware, not really.. so you would have a market data component that will be pushing stuff to various components (real time risk, the pricer, and the trading engine). The disruptor sits on the 'in' queue to those components.. the middleware is what pushes messages between instances running on the same/different machines.
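For the quants in the audience, a toy in-process version of that layout - market data fanned out to each component's 'in' queue - might look like this (real middleware such as 29West/LBM replaces the plain queues, and the disruptor would sit behind each one):

```python
import queue

def publish(message, component_queues):
    """Stand-in for the middleware layer: fan one market-data message
    out to every component's 'in' queue (risk, pricer, trading engine)."""
    for q in component_queues.values():
        q.put(message)

components = {name: queue.Queue() for name in ("risk", "pricer", "engine")}
publish({"sym": "XYZ", "bid": 100.10, "ask": 100.15}, components)

# Each component drains its own queue independently, at its own pace.
tick = components["risk"].get_nowait()
```

The point of the separation is that the market data component never blocks on a slow consumer; each downstream component buffers and processes at its own rate.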

Hope that helps :)

I get the sense that the Lodestone project is now defunct.

In another month or two the analytical tooling for doing stats and numerics in Haskell land is going to have a huge leap forward in capabilities. Might be worth considering going full Haskell then :-)

What's happening then?

From my talks with Carter previously, he's working on a haskell-based platform for analytic tooling. Basically the core primitives for making large-scale data analysis apps in Haskell.

Or that's what he was up to in August. Hopefully he can chime in here and update everyone on his progress. I honestly hope he succeeds in his plans.

Still working on it!

Been taking a bit longer to get the core worked out that I'd have liked, but life happens (eg my mom had cancer for a month this winter, though she's fine now, which is awesome. She didn't even need chemo or rad!!).

Also, I was originally planning to NOT write my own linear algebra substrate, but I quickly realized all the current tools suck, and that I needed to come up with a better numerical substrate if I wanted to do better.

What do I mean by this? With all the numerical tools out there presently, there are none that address the following falsehood that many folks believe is true: "you can have high level tools that aren't extensible but are fast, or you can have low level tools that are extensible and fast.".

I want high level tools that are fast. I want high level tools that are fast AND extensible. I want it to be easy for the end user to add new matrix layouts (dense and structured, structured sparse, or general sparse) and have generic machinery for giving you all the general linear algebra machinery with only a handful of new lines of code per new fancy layout. I want to make it idiomatic and natural to write all your algorithms in a manner that gives you "level 3" quality memory locality. I want to make sure that for all but the most exotic of performance needs, you can write all your code in haskell (and by exotic I mean maybe adding some specialized code for certain fixed-sized matrix blocks that fit in L2 or L1, but really that's not most people's real problems).

Here's the key point in that ramble that's kinda a big deal: getting "level 3" quality memory locality for both sparse and dense linear algebra. I think I've "solved" that, though ultimately the reality of benchmarks will tell me over the coming few weeks whether I have or not.

Likewise, I think I have a cute way of using all of this machinery to give a sane performance story for larger-than-RAM linear algebra on a single machine! There's going to be some inherent overhead to it, but it will work, and doing a cache-oblivious optimal dense matrix multiply of two square 4GB+ ish sized matrices on a MacBook Air with 4GB of RAM is going to be a cute benchmark that no other lib will be able to do out of the box. Likewise, any sparse linear algebra will have lower flops throughput than its dense equivalent, but that's kinda the price you pay for sparse.
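For readers wondering what "cache-oblivious" means here: the classic trick (Frigo et al.) is to recurse on matrix quadrants, so that at some recursion depth the sub-blocks fit whatever cache exists, without ever tuning for its size. A toy quadtree version for power-of-two matrices (my illustration, not the library's actual representation):

```haskell
-- Quadtree matrices: a leaf is a scalar, a node is four quadrants.
-- Recursing on quadrants is cache-oblivious: at some depth the
-- sub-blocks fit in cache, whatever size the cache happens to be.
data Mat = Leaf Double | Quad Mat Mat Mat Mat
  deriving (Eq, Show)

add :: Mat -> Mat -> Mat
add (Leaf x)       (Leaf y)       = Leaf (x + y)
add (Quad a b c d) (Quad e f g h) =
  Quad (add a e) (add b f) (add c g) (add d h)
add _ _ = error "shape mismatch"

-- Block multiply: [[a,b],[c,d]] * [[e,f],[g,h]], applied recursively.
mul :: Mat -> Mat -> Mat
mul (Leaf x)       (Leaf y)       = Leaf (x * y)
mul (Quad a b c d) (Quad e f g h) =
  Quad (add (mul a e) (mul b g)) (add (mul a f) (mul b h))
       (add (mul c e) (mul d g)) (add (mul c f) (mul d h))
mul _ _ = error "shape mismatch"
```

The same recursion structure extends naturally to out-of-core data: the upper levels of the tree page blocks in and out, while the lower levels enjoy locality for free.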

What I find very very interesting is that no one's really done a good job of providing sparse linear algebra with any semblance of memory locality. I kinda think that I have a nice story for that, but again, at the end of the day the benchmarks will say.

I at the very least hope the basic tech validates, because there needs to be a good non-GPL lin alg suite with good perf for Haskell. Hmatrix being GPL has blocked the growth of a nice numerics ecosystem on hackage / in Haskell for years, and it's about time someone puts on some pants and fixes that.

Assuming the tech validates, I really hope the biz validates too (despite me likely making various pieces open source in a BSD3 style way to enrich the community / get hobbyist adoption / get other libs written on top; people in Haskell land try to avoid using libs with licenses that aren't BSD/MIT/Apache style), because there's so much more that needs to be done to really have a compelling toolchain for data analysis / numerical computation / machine learning / etc, and I really really like spending my time building better tools. Building the rest of that stack will be outlandishly tractable, assuming my linear algebra tech validates as having the right regimes of performance on large matrices. (Amusingly, no one ever benchmarks linear algebra tools in the 1GB+ regime, and I suspect that's because at that point vectorization means nothing; it's all about memory locality, memory locality, and a dash of cache-aware parallelism.)

That's the vague version :)

And that's also not even touching my thoughts on the analytics / data vis tools that go on top. (Or the horrifying fact that everyone is eager for better data vis tools, even though most data vis work is about as valuable as designing pretty desktop wallpapers to background your PowerPoint presentations... so even if I get everything working, I have a horrifying suspicion that if I allowed unsophisticated folks to use the tools, most of the revenue / interest would be around data vis tooling! Which would mostly be used to provide their customers/end users with pretty pictures that make them feel good but don't help them!)

Point being: I want to be able to say "you understand math, you understand your problem domain, and you can learn stuff. Spend 2-3 weeks playing with Haskell and my tools, and you'll be able to focus on applying the math to your problem domain like never before, because you didn't even realize just how terrible most of the current tools you were wrestling with are!"

I really really really hope the biz+tech combo validates... because then I could occasionally stop and think "holy fuck, I'm bootstrapping my fantasy job / company, the likes of which I imagined / dreamed of as way back as middle school and high school!"

Realistically there are 3 different outcomes:

the tech doesn't validate (and thus the biz doesn't either) --- then I'm looking for a day job ... (and I'm pretty darn egomaniacal and loud, so finding a well-fitting day job would take a bit of work!)

the tech works yet the business doesn't --- not sure how that would happen, especially since having no investors means that enough income to support myself would still count as a successful business, though I guess I'd have some compelling portfolio work if I went job hunting

the tech and biz both validate, and I earn enough to move out of my parents' --- magic pony fantasy land of awesome. What more could anyone want? MORE AWESOME PROBLEM DOMAINS THAT NEED BETTER TOOLS (I mean, that would really be sort of the ideal, but it remains to be seen if that can happen.)

Thanks for the update Carter. I'll root for you.

One point to make: the interface and the performance don't have to appear at once. The interface will be the longer-lived portion. So sort that out, and you can focus on performance as problems crop up.

I know it's horribly boring to say, but getting those first few customers gets you into a virtuous cycle. Given that you're bootstrapped, even a few customers will get you in a very good place, where you can spend on development.

I remember us (or rather me) talking about Mathematica. When it first came out it was horrible for numerics. Truly terrible. But it was easy to transfer technical papers to it. You simply wrote down what was on the paper, and you were done.

So people used it, and eventually performance got better over time as they invested in it.

Agree with everything you say. Hence I'm just going to be releasing the lin alg soon. It actually turns out that for linear algebra code done right, the API has an intimate relationship with the possible performance! (This will be more apparent once I get things out the door.)

There's a lot I'll not even be trying to do in the first release: e.g. parallelism, distributed computation, SSE/AVX intrinsics.

Fret not, things are moving apace, and basic tech validation and thence conditioned upon that, public release, are approaching scary fast! :-)

You know exactly what I'm up to slowly with wellposed :)

(Building numerics / data analysis tools that don't fill me with rage and ire over terrible engineering and usability. A matrix / linear algebra kernel of tools is on track to be ready for hackage release + paid pro versions in 1-2 months.)

Classical landing page had me hooked anyways :-)

thankee good sir. I'm glad some people appreciate my vague semblance of prioritization skills. Hopefully that pans out to being able to have any early customers be of the sophisticated sort that I'm excited to work with / help!

What is the name of your library? Or where do you post news?

It's not out yet. If you really want to hear about things as they happen, sign up for the announce list linked from www.wellposed.com. I've yet to fire off any emails to that list, but I anticipate 2-3 emails over the next 1-2 months (after a year of hard work and focused thinking).

What's also kinda awesome is I think the alpha release with all the lin alg functionality should be under 2k LOC. A lot of the work has been figuring out how to make the design composable and extensible enough that I can write a first working version with good performance just on my own. Ironically, that's also a compelling way to validate that my tech delivers.

Hey, I'm the author of the HLearn library (http://hackage.haskell.org/package/HLearn-algebra). I just got two papers accepted into TFP and ICML about the algebraic nature of machine learning, and how we can make machine learning algorithms both fast and user-friendly in Haskell. I plan a major update of my library in about a month to bring it up to date with these research contributions, and I'd love to chat about how we can work together to make this a reality. My email's in my profile.

Oh cool! I've seen your stuff, it's very very neat, but the GPL makes me nervous about looking at the code / playing with it much.

Yeah, we should def chat! My email's also in my profile. Let's one of us get around to emailing the other sometime this week.

Cool! Thx guys. Did you look at this library: Cloud Haskell/distributed-process? [1]

[1] https://github.com/haskell-distributed/distributed-process

It's promising but still needs a lot of higher-level tools before it's usable by a non-expert. Distributed systems done right are hard. Let's go shopping.

1. Timestamps in your seed data might benefit from nanoseconds if you're really talking about "high" frequency.

2. I agree with your comment that it is easier to think about concurrency in Haskell than in something like C++; however, you can't really compete with C/C++ in Haskell. Not even with cgo (Go packages that call C code), nor with OCaml or any other higher-level beasts that promise the speed. Fortran would be the only one faster for the "algo" part of your initiative. But again, if this is just an exercise, Haskell and others (I prefer Clojure, for example :) will do just fine.

3. It would make sense to split the "platform" into two (very different) parts: "Quantitative Analysis" (a collection of tools and rules) and "Technical Glue to Read and Stream". Each can/should be divided further of course, but the two above are essential yet very different for a true "HFT platform".

I wish I could get down to nano units. iqfeed (a good-value feed) just got millisecs in, so I'll settle for that.

I'm preparing some speed tests between C++ and Haskell on an identical block of processing, so stay tuned! You might be surprised - Haskell is way ahead of Clojure on compiler smarts.

The split you suggest is exactly what I think is wrong with the way things get done right now. I'd like to integrate the quant inside the read and stream - now that's potentially a large speedup that might compensate for a tight budget.
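On the C++-vs-Haskell comparison above: for a flavour of the kind of identical numeric kernel such a test might time (my illustration, not the actual test suite), a strict dot product is typical, since with `-O2` GHC unboxes the accumulator and compiles it to a tight loop:

```haskell
import Data.List (foldl')

-- A tight numeric kernel of the sort worth benchmarking across
-- languages: a strict left-fold dot product. foldl' keeps the
-- accumulator evaluated, so no thunks pile up on long inputs.
dot :: [Double] -> [Double] -> Double
dot xs ys = foldl' (+) 0 (zipWith (*) xs ys)
```

A fair comparison would of course use unboxed vectors rather than lists on the Haskell side, just as the C++ side would use contiguous arrays.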

Keep in mind that if you're not an experienced C/C++ coder, you're going to be (largely) benchmarking your relative ability in either language rather than the intrinsic speed of each language.

This is true of any language/coder. It's what makes cross platform benchmarking so difficult for non-trivial problems.

I'm building an open source matching engine. I would love to pair up the two systems for a stress test. I'll keep tracking your project and ping you again when I have a system up and running, if you're interested.


We have some plans on the drawing board for market maker simulators that could do with some fast matching so I'll keep an ear open for that ping.

What sort of performance baselines would you require?

I have no idea yet. I see you're geeking out in go and there's been a lot of advocacy coming through to look at go as an alternative.

Yeah, Go is a great language. However, I am trying to develop some low level queues that should be faster, and less generic, than the ones that Go provides. But I am having some difficulty putting memory fences into my code.

But if I can get a reasonable level of performance out of the standard Go queues then I will just use those in the meantime.

I see a comment about ITCH in there... Are you modeling your interface after NASDAQ's ITCH/OUCH interfaces? If so, I think it would be interesting to look at the protocol specifications for all of the other major US equity exchanges. I think they are all freely-downloadable.

I was using ITCH data to try to test the matching algorithm. However, I found that the number of hidden trades made it fairly poor for validation, which was what I was most interested in.

I _was_ using the ITCH format for messages and switched away from it. I may revisit that decision in the future.

I agree that it would be really interesting to look at the messaging protocols of real exchanges, and I have made a quick stab at it. But I am focused on getting something very simple working right now.

If you want to play around with ITCH files, the executable generated in the itch directory will run any ITCH file through the matching engine sort of like a debugger, allowing you to step forwards and backwards in time and see how the internal state of the matching engine changes.

I worked on a framework for managing stock and options positions based on feedback methods developed by Stafford Beer. It doesn't do the algorithmic/transaction part, but rather focuses on providing reporting and alerting for large hierarchies of institutions, accounts, stock and options positions. The proprietary algorithmic code could be added by subclassing the code that I wrote. It's called the Viable System Agent. It's written in Smalltalk, tested under both Squeak and Pharo. It's licensed under the BSD license. It can be found at:


Look at the RBC-VSA-Portfolio category. That's where all the stock/option code is.

The industry is doing the vast majority of its HFT development in C++ or Java. Data analysis is mostly done using Python although I still see a lot of R.

If you are doing this to break into the industry, I suspect the languages you used should have been the above. Also the above languages would probably have been better to attract open source developers who are also hoping to use their code and experience from this project to break into the industry.

I've been in the industry for too long (though not in low-latency) - I'm doing this to break out of the industry! And I think the finance game is long overdue for a good disruptive technological event (not implying my humble project is it). R is a personal choice for data analytics (I love ggplot2). But the Haskell thing is more than that. It's been done before (http://www.starling-software.com/misc/icfp-2009-cjs.pdf) and even Goldman's has a thriving Erlang/OCaml hacker ethic (http://www.zerohedge.com/article/aleynikov-code-dump-uncover...).

He didn't really mean HFT. At least I don't think he did.

I have no experience in the financial industry, but HFT has always fascinated me as a potential source for very high rates of "events". Could you share your general insight about the sheer volume of data that commonly gets pushed through an HFT system? I'd also be terribly interested in a multi-megabyte/gigabyte "recording" of HFT trade data.


This project is not HFT. It does not use a full book data feed. Really, even the feeds that come from the exchanges are not precisely timestamped enough, and HFT firms stamp their own. The data is not huge, although trading systems need to be able to deal with large spikes in the rate of events. Usually now people are using 10G/40G/InfiniBand as a connect to the matching engine.

So, yes, on my toy wannabe HFT feed, I'm currently clocking about 10 million ticks per day on about 8 futures tickers I'm tracking. The raw feed is between 1 and 1.5 GB per day. I can imagine the petabyteness of the bigger guys.

This could be an interesting course for people wanting to know more about 'computational investing' and the algorithms behind stock (market) analysis and trading: https://www.coursera.org/course/compinvesting1. Not specially targeted at HFT, but interesting basic background information nevertheless.

Sorry to burst your bubble, but high-frequency trading is only viable and necessary for market makers. The end result of any type of algorithm you can think of is always a curve-fitting function of existing data.

No need to apologise - curve fitting is a big issue. But I would think that everything that happens on Kaggle would be curve fitting by your definition, right? And how is the stuff that happens in brains anything other than curve fitting? It's how you connect the data dots and what you choose to focus on that makes the difference.

You're asking how the human brain works; that's a whole different issue, and I don't know what Kaggle is/does.

Check out IBrokers: http://cran.r-project.org/web/packages/IBrokers/index.html

It'll get you up and running with IB in no time.

On a similar note, a while back I wrote a bi-directional, fan-out adapter between the Interactive Brokers API and ZeroMQ.


The annoying thing about the IB API is that there's no framing; that is, you can't simply consume the message types you are interested in without parsing the entirety of every variable-length message.

ib-zmq resolves this annoyance by parsing incoming messages and placing them individually into ZeroMQ message frames.

I also wrote an alternative to the IBrokers R package with a much nicer interface using this ZeroMQ adapter. It parses most IB API messages, but hasn't been used in production yet.


Yeah, IBrokers is great. R is even better! Every time I need a specialist piece of code, it seems to already exist as an R package and to have been released in the last few weeks.

I assume you've checked out Marketcetera ( http://www.marketcetera.com ). What will be your platform's primary differentiators?

Not having to register and login is one immediate differentiator. Marketcetera is a big, big code base - I think our ambitions are more focused.

Good to see Haskell. I must give you a compliment for being part of pushing Haskell into the mainstream!

BTW, many trading firms do use Haskell in trading, but that's all privately held.

On a possibly related note, Josh Levine, creator of the Island trading engine, released the FoxPro source a few years ago. Interesting stuff.



Jerk boy!

I would be much more interested in an open source solution to fight against HFT, or at least make it more difficult.

I see the financial world as a necessary evil, and I think some of its parts have benefits for the rest of the world. HFT is not one of those parts; I see it as a pure burden.

You could have made an open source Patent Trolling system and I would feel the same.

Wow, that's a pretty low ranking, being shoved into the patent-troll category of evilness. When someone goes and patents an obvious algorithm at some point in the future and my little project shows up as prior art, will I be redeemed? :)

I thought HFT was now based on FPGAs and ASIC circuits ... how could this compete in terms of latency/processing ?

By putting it on FPGAs, maybe. Stage one is to get a robust process for measuring and researching latency - I'm in Australia, and US data is bouncing into my mac via a lousy connection and a TCP port in 250 millisecs on average. There's a long way to go. I suspect that if you can keep within a bull's roar of the low-latency crowd, the big gains will be on the algorithm-processing side. Algorithms still add up a few million numbers every time they recalculate a moving average - there are better ways to do it.
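The moving-average point is easy to make concrete: a simple moving average can be maintained incrementally in O(1) arithmetic per tick, by subtracting the value leaving the window and adding the one entering it, instead of re-summing the whole window. A toy sketch of mine (a real implementation would use a ring buffer rather than a list, and `winSize` must be at least 1):

```haskell
-- Incremental simple moving average: keep the window contents and
-- their running sum, so each new tick costs one subtraction and one
-- addition instead of re-summing millions of numbers.
data SMA = SMA { winSize :: Int, window :: [Double], total :: Double }

newSMA :: Int -> SMA
newSMA n = SMA n [] 0

-- Feed one tick; returns the current average and the updated state.
step :: Double -> SMA -> (Double, SMA)
step x (SMA n xs s)
  | length xs < n =               -- window still filling up
      let s' = s + x
      in (s' / fromIntegral (length xs + 1), SMA n (xs ++ [x]) s')
  | otherwise =                   -- drop oldest, add newest
      let (old : rest) = xs
          s'           = s - old + x
      in (s' / fromIntegral n, SMA n (rest ++ [x]) s')
```

Exponential moving averages are even cheaper, needing only the previous value and no window at all.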

FPGAs and ASICs are rare -- at least at the HFT firm I work for, they're basically non-existent.

You need to do <i class="icon-iconname"></i> instead of <i>icon-iconname</i>... The links all have icon names in them.

Edit: Ah. It now works only with JS on when putting the icon-name into <i> and not in the class. Sorry for that, seems to be new in Bootstrap.

It's Twitter bootstrap. http://twitter.github.io/bootstrap/base-css.html#icons Bootstrap V3 has tweaked the way icons are implemented slightly.

thanks heaps for that!

Aside from all the other points raised against this, there's also the fact that the timing on HFT is so close that it requires close colocation of the physical hardware to the trading systems. Only big boys get to have access to those racks.

I have access to servers in all the major trading rooms if you need to start deploying this to a production environment to start trading. Contact me at dasickis [at] gmail.com (I'll reply back from my non-filtered e-mail address)

> using haskell,R and emacs org-mode.

That last is so very, very, cool.

I'll have to dive in, take a look, see if I can find out _how_ you're using it. Been wanting to do a project with org-mode for a while, now.

It is - I can't imagine working without it. But I'm not sure I'm the poster child for how to use org-mode properly. Every piece of code in the repo actually resides in the org file, which is a big monolithic journal of where I'm at.

o-blog is a great emacs package that lets you publish sites straight from org.

+1 for using Haskell.

How can someone who has absolutely no idea about the financial industry and HFT learn about it?

Can anyone recommend any resources (books, tutorials, etc..)?

mikevm, check out Dark Pools by Scott Patterson. Great book about the evolution of HFT:


Which software did you use to build the diagram on the main page?

The graphviz dot language: http://www.graphviz.org/Documentation.php

It has good support in org-mode, and there's a nice Haskell package that takes the dot code and turns it into an internal graph representation.

Have you actually used this in a production setting?

No, and I am making no claims. I have a market event feed coming in, a good idea of what the event processing looks like and a rough idea of how to send an order to a broker. I think the project needs to get to production fast though.

You're doing it wrong: Haskell is too slow.

I thought so too for a long while, then I tried to do a touch of concurrent code in C++ and had to gouge my eyes out. I'm excited about the speed-up Haskell brings to development. You can plan things in Haskell you can't imagine in other languages.

If concurrency with C/C++ is bad for you, Go makes it very easy. However, it is not very fast either.

Doing it in Go and then using cgo where necessary will get you pretty close to C speed.

+1 for doing it in Go. If concurrency is your objective, Go makes it easy... it's also very fast both to program in and to run.

> +1 for doing it in Go

Conspicuously you don't address Haskell at all. Then again, who cares what might actually be best for OP? There's an advocacy bingo card in play!

How much experience do you have in programming, to be claiming that?

Interestingly, I had this exact conversation last week with a guy from an HFT firm who was asking how you write fast code in Haskell. Haskell won't be as fast as C, but we are not a high-frequency shop so it doesn't matter. Much engineering time at an HFT firm is spent doing things like turning an L2 cache miss into an L1 cache miss. Haskell makes it hard to do things like that.

Our platform is fast and could be used for some slower strategies. If we replaced pieces of it with C we could be competitive, but probably never less than 10 microseconds wire-to-wire.

Haskell is actually quite efficient and tends to be excellent for concurrency.

But how elegant it is, umm... :)
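For what it's worth, here's about the smallest demonstration of the concurrency story people mean: lightweight green threads and atomic shared state from `base` alone, with no explicit locking code. This is my toy counter, not anything from the project being discussed:

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM, replicateM_)

-- Spawn `nThreads` lightweight threads that each bump a shared MVar
-- counter `nBumps` times; modifyMVar_ gives an atomic
-- read-modify-write, so the final count is deterministic.
countWith :: Int -> Int -> IO Int
countWith nThreads nBumps = do
  counter <- newMVar 0
  dones   <- replicateM nThreads newEmptyMVar
  forM_ dones $ \done -> forkIO $ do
    replicateM_ nBumps $ modifyMVar_ counter (return . (+ 1))
    putMVar done ()
  mapM_ takeMVar dones          -- wait for every thread to finish
  readMVar counter
```

`countWith 4 1000` always yields 4000; swap the MVar for STM's `TVar` when you need composable multi-variable transactions.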
