Back in my last startup, I built a crypto market intelligence website that subscribed to full trade & order book feeds from the top 10 exchanges. It handled about 3K incoming messages/second (~260M per day), including all of the message parsing, order book updates, processing, streaming to websocket connections for any connected clients, and archival to Postgres for historical processing. Total hardware required was 1 m4.large + 1 r5.large AWS instance, for a bit under $200/month, and the boxes would regularly run at about 50% CPU.
I'm more than a little annoyed that so much data engineering is still done in Scala Spark or PySpark. Both suffer from pretty high memory overhead, which leads to suboptimal resource utilization. I've worked with a few different systems that compile their queries into C/C++ (transparently to the developer). Those tend to be significantly faster, or can use fewer nodes to process the same workload.
I get that quick & dirty scripts for exploration don't need to be super optimized, and that throwing more hardware at the problem _can_ be cheaper than engineering time, but in my experience that attitude ends up costing my org tens of millions of dollars annually: just write some code and allocate a ton of resources to make it work in a reasonable amount of time.
I'm hopeful that Ballista, for example, will see uptake and improve this.
To my amusement, my little SQLite prototype smoked the "enterprise" database. Turns out that a MacBook Pro SSD performs better than the SAN, and the query planner needed more TLC. We ended up running the queries off my laptop for a few days while the DBAs did their thing.
I was thinking about how they must have a routine that’s constantly taking mouse input, buffering history, and running some algorithm to determine when user input is a mouse “shake”.
And how many features like this add up to eat up a nontrivial amount of resources.
In some places there's no room left for unnecessary abstractions; I can imagine most of the code touching mouse / cursor handling is in that category.
Looks like it's measuring something like the number of direction changes in relation to distance traveled, ignoring the y axis completely.
Truncating the impulse response after five time constants wouldn't really change its output noticeably, and even if you truncated it after two or three time constants it would still be inferior to the box filter for this application, though less bad. So in that sense the problem isn't that it's infinite.
Likewise, you could certainly design a direct-form IIR filter that did a perfectly adequate job of approximating a box filter for this sort of application, and that might actually be a reasonable thing to do if you wanted to do something like this with a bunch of op-amps or microwave passives instead of code.
So the fact that the impulse response is infinite is neither necessary nor sufficient for the problem.
The problem with the simple single-pole filter is that by putting so much weight on very recent samples, you sort of throw away some information about samples that aren't quite so recent and become more vulnerable to false triggering from a single rapid mouse movement, so you have to set the threshold higher to compensate.
It turns out that pretty much any time you have code that interacts with the world outside computers, you end up doing DSP. Graphics processing algorithms are DSP; software-defined radio is DSP; music synthesis is DSP; Kalman filters for position estimation are DSP; PID controllers for thermostats or motor control are DSP; converting sonar echoes into images is DSP; electrocardiogram analysis is DSP; high-frequency trading is DSP (though most of the linear theory is not useful there). So if you're interested in programming and also interested in graphics, sound, communication, or other things outside of computers, you will appreciate having studied DSP.
int m = abs(dx) + abs(dy); // Manhattan distance of this mouse movement

// simple single-pole filter (exponential decay):
c -= c >> 5; // exponential decay without a multiply (not actually faster on most modern CPUs)
c += m; // add in the new movement

// box filter over the last n movements, using a ring buffer of prefix sums:
s += m; // update running sum
size_t j = (i + 1) % n; // calculate index in prefix sum table to overwrite
int d = s - t[j]; // calculate sum of last n mouse movement Manhattan distances
t[j] = s; // store the new prefix sum
i = j; // advance the ring buffer index
// (equivalently, instead of keeping s as a separate variable: int s = t[i] + m;)
Once you've computed your smoothed mouse velocity in c or d, you compare it against some kind of predetermined threshold, or maybe apply a smoothstep to it to get the mouse pointer size.
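For illustration, a tiny sketch of that last step in Python (the threshold and scaling constants are made up):

def smoothstep(x):
    # clamp to [0, 1], then the usual 3x^2 - 2x^3 easing curve
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)

def pointer_scale(d, lo=2000, hi=6000):
    # map the smoothed movement sum d onto a pointer size between 1x and 4x
    return 1.0 + 3.0 * smoothstep((d - lo) / (hi - lo))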
Roughly I think WanderPanda's approach is about 12 RISCish CPU instructions, and nostrademons's approach is about 18 but works a lot better. Either way you're probably looking at about 4-8 clock cycles on one core per mouse movement, considerably less than actually drawing the mouse pointer (if you're doing it on the CPU, anyway).
Does that help?
Possible but unlikely. Well-written desktop software is never constantly polling for input; it sleeps on OS kernel primitives like poll/epoll/IOCP/etc., waiting for those inputs.
Operating systems don't generate mouse events at 1kHz unless you actually move the mouse.
Another possibility: do you have a gaming mouse with a 1000 Hz polling rate configured?
What I've seen is that you need people who deeply understand the system (e.g. Spark) to be able to tune for these edge cases (e.g. see  for examples of some of the tradeoffs between different processing schemes). Those people are expensive (think $500k+ annual salaries) and are really only cost effective when your compute spend is in the tens of millions or higher annually. Everyone else is using open source and throwing more compute at the problem or relying on their data scientists/data engineers to figure out what magic knob to turn.
That being said, Spark is literally the only (relatively) easy way to run distributed ML that's open source. The competitors are GPUs (if you have a GPU-friendly problem) and running multiple Python processes across the network.
(I'm really hoping that people will now school me, and I'll discover a much better way in the comments).
(I'm totally speculating, but your story seems so true that it inspired me :-)
I have now seen this anti-pattern in multiple places.
This is interesting. Can you elaborate a bit?
Furthermore, PySpark is by far the most popular and most-used flavor of Spark, and it also has the absolute world-worst, atrocious mechanical sympathy. Why?
Developer velocity trumps compute velocity any day?
(I want the niceness of Python and the performance of e.g. Firebolt. Why must I pick?)
(There is a general effort to take Spark "off heap" and use generic query compute in the Spark SQL space, but it is miles behind the engines that started off there.)
If it had an SQL layer, I'd spend time evangelising it, but it's not worth learning another query language for.
There exists a world where it got open-sourced before Hadoop was built, and in that world it's probably everywhere.
We had a system management backend at my last company. Loading the users list was unbearably slow; 10+ seconds on a warm cache. Not too terrible, except that most user management tasks required a page reload, so it was just wildly infuriating.
Eventually I took a look at the code for the page, which queried LDAP for user data and the database for permissions data. It did:
get list of users
get list of all permissions
filter down to the ones assigned directly to the user
get list of all groups
get list of all permissions
filter down to the ones assigned to the group
filter down to the ones the user has
I replaced it with
get list of all users
get list of all groups
get list of all permissions
Also worth noting: fetching the user list required shelling out to a command (a Python script), which itself shelled out to another command (ldapsearch), and the whole system was a nightmare. There were also dozens of pages where almost no processing was done in the view, but a bunch of objects with lazy-loaded properties were passed into the template and always used. When benchmarking you'd get 0.01 seconds for the entire function and then 233 seconds for "return render(...)", because for every single row in the database (dozens or hundreds) the template would access a property that triggered another SQL call to the backend, rather than just doing one giant "SELECT ALL THE THINGS" and hammering it out that way.
Note that we also weren't using Django's foreign keys support, so we couldn't even tell Django to "fetch everything non-lazily" because it had no idea.
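For comparison, if the models had real foreign keys, telling Django to fetch eagerly is a one-liner (the relation names here are hypothetical):

users = User.objects.select_related("profile").prefetch_related("groups")
for u in users:
    print(u.profile.title, [g.name for g in u.groups.all()])  # no extra queries per row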
If that app were written right it could have run on a Raspberry Pi 2, but instead there was no amount of cores that could have sped it up.
In the case of groups and permissions there's probably only a few of each, so fetching all of them is probably fine. But depending on your data -- say you're fetching comments written by a subset of users, you can tweak the above to use IN filtering, something like this Python-ish code:
from collections import defaultdict

users = select('SELECT id, name FROM users WHERE id IN $1', user_ids)
comments = select('SELECT user_id, text FROM comments WHERE user_id IN $1', user_ids)

comments_by_user_id = defaultdict(list)
for c in comments:
    comments_by_user_id[c.user_id].append(c)  # group comments by their author

for u in users:
    u.comments = comments_by_user_id[u.id]  # attach each user's comments, no per-user queries
For development, we had a ?queries=1 query parameter you could add to the URL to show the number of SQL queries and their total time at the bottom of the page. Very helpful when trying to optimize this stuff. "Why is this page doing 350 queries totalling 5 seconds? Oops, I must have an N+1 query issue!"
One thing I don't like about ORMs is how innocent the N+1 pattern looks:

users = User.objects.all()
for u in users:
    num_comments = u.comments.count()  # looks harmless, but runs one COUNT query per user

versus the same thing in explicit SQL, where the extra round trips are obvious:

users = select('SELECT id, name FROM users WHERE id IN $1', user_ids)
for u in users:
    num_comments = get('SELECT COUNT(*) FROM comments WHERE user_id = $1', u.id)
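Either way, the fix is to fetch all the counts in one round trip. A rough sketch using the same hypothetical select() helper as above:

rows = select('SELECT user_id, COUNT(*) AS n FROM comments WHERE user_id IN $1 GROUP BY user_id', user_ids)
counts = {r.user_id: r.n for r in rows}
for u in users:
    u.num_comments = counts.get(u.id, 0)  # one query total, regardless of how many users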
The other thing I don't like about (most) ORMs is they fetch all the columns by default, even if the code only uses one or two of them. I know most ORMs provide a way to explicitly specify the columns you want, but the easy/simple default is to fetch them all.
I get the value ORMs provide: save a lot of boilerplate and give you nice classes with methods for your tables. I wonder if there's a middle ground where you couldn't do obviously bad things with the ORM without explicitly opting into them. Or even just a heuristic mode for development where it yelled loudly if it detected what looked like an N+1 issue or other query inside a loop.
https://github.com/django-query-profiler/django-query-profil... has a neat option for detecting likely N+1 queries. I usually use the Django Debug Toolbar for this.
Django's ".only()" method lets you specify just the columns you want to retrieve - with the downside that any additional property access can trigger another SQL query. I thought I'd seen code somewhere that can turn those into errors but I'm failing to dig it up again now.
I've used the assertNumQueries() assertion in tests to guard against future changes that accidentally increase the number of queries being made without me intending that.
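Something along these lines (the URL and expected count are made up):

from django.test import TestCase

class UserListTests(TestCase):
    def test_user_list_query_count(self):
        # fails if a change sneaks in extra queries (e.g. a new N+1)
        with self.assertNumQueries(3):
            self.client.get("/users/")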
The points you raise are valid, but there are various levels of mitigations for them. Always room for improvement though!
Django ORM has a succinct way of doing the "SELECT COUNT(*)" pattern. Instead of:

users = User.objects.all()
for u in users:
    num_comments = u.comments.count()

you can write:

from django.db.models import Count

users = User.objects.annotate(n_comments=Count("comments"))
for u in users:
    num_comments = u.n_comments  # counted by the database in the same query
And this is how you end up with the problems the parent is describing. During testing and when you set up the system you always have a small dataset, so it appears to work fine. But when it's doing real work, the system collapses.
> SELECT user_id, COUNT(*) FROM comments WHERE user_id IN $1 GROUP BY user_id
Fwiw, ActiveRecord with the Bullet gem does exactly that. I'd guess there's an equivalent for Django.
SQLAlchemy has this as part of the ORM; it should really be part of Django, IMO.
It's one of those things that, in the long run, would have been much more time-effective to write, but the debugging never quite took long enough each time to make me take the time.
I'm so glad Symfony (PHP framework) has a built-in profiler and analysis tooling there...
For SQL you can also do a stored procedure. Sometimes that works well if you are good at your DBMS's procedure language and the schema is good.
With either technique you are still pulling all the data you need from the DB, but with multiple queries (instead of a stored procedure) you are usually pulling more data than you need with each query and then dropping any rows or fields you're not interested in. Together with multiple calls over the network to the DB server, and (often) multiple SQL connection setups, this is much worse for performance on both the web and database servers.
Lots of people seem to not realize that db roundtrips are expensive, and should be avoided whenever possible.
One of the best illustrations of this I've found is in the Transaction Processing book by Jim Gray and Andreas Reuter, where they illustrate the relative cost of getting data from the CPU vs CPU cache vs RAM vs a cross-host query.
for each user in (get_freeipa_users | grep_attribute uid):
    email = (get_freeipa_users | client_side_find user | grep_attribute email)
    last_change = (get_freeipa_users | client_side_find user | grep_attribute krblastpwdchange)
    expiration = (get_freeipa_users | client_side_find user | grep_attribute krbpasswordexpiration)
    # Some slightly incorrect date math...
- it's memory efficient
- it's atomic
- it's faster
Also, doesn't LDAP support filtering in the query?
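It does. A rough python-ldap sketch (server URL, bind credentials, base DN, and attribute names here are placeholders), filtering server-side and asking for only the attributes you need, in a single query:

import ldap

conn = ldap.initialize("ldap://ipa.example.com")
conn.simple_bind_s("uid=svc,cn=users,dc=example,dc=com", "secret")

# one round trip: server-side filter plus an explicit attribute list
results = conn.search_s(
    "cn=users,cn=accounts,dc=example,dc=com",
    ldap.SCOPE_SUBTREE,
    "(uid=*)",
    ["uid", "mail", "krbLastPwdChange", "krbPasswordExpiration"],
)
for dn, attrs in results:
    print(attrs["uid"], attrs.get("mail"))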
It turned out that the dashboard had been built on top of Wordpress. The way that it checked if the user had permission to access the dashboard was to query all users, join the meta table which held the permission as a serialized object, run a full text search to check which users had permission to access this page, and return the list of all users with permission to access the page. Then, it checked if the current user was in that list.
I switched it to only check permissions for the current user, and the page loaded instantaneously.
Everything else is network hops and what I call "distributed clutter": authorizing via a third party like Auth0 multiple times for machine-to-machine tokens (because "zero trust"!), multiple parameter store calls, hitting a dcache, cold starts if a serverless function is involved, API gateway latency, etc...
So for the meat of a 20-40 ms call, we get about a 400ms-2s backend response time.
But DevOps will say "but my managed services and infinite scalability!"
In that case, was there a reason joins couldn't be used? It still seems pretty wasteful (and less performant) to load all of this data into memory and post-process it, when a well-indexed database could probably do it faster and with less memory usage.
The mistake, of course, was not thinking about why this approach is faster as a database query, and that it doesn't work that way when you already have to pull all the data out of LDAP to do anything with it.
I work at a company which processes "real" exchanges, like NASDAQ, LSE, and, especially, the OPRA feed.
We've added 20+ crypto exchanges in our portfolio this year, and all of them are processed on one old server which is unable to process NASDAQ Total View in real-time anymore.
On the other hand, the whole OPRA feed (more than 5 Gbit/s, or 65B messages/day, yes, it is billions, of a very optimized binary protocol, not this crappy JSON) is processed by our code on one modern server. Nothing special: two sockets of Intel Xeons (not even Platinums).
I mean, I'm still going to use it for client/server communication and the like, because I don't have performance constraints serious enough to warrant something that would be more difficult to develop for, etc., but still.
One, that he’s surprised by how small crypto markets are.
Two, that this one server (or very few server processing) thing scales quite well to billions of messages a day.
I didn’t find any element of smugness here, but maybe I misread the tone.
The thing is - when two engineers get smug, oftentimes lots of fairly interesting technical details get exchanged, so such discussions aren't really useless to bystanders.
JSON is very inefficient both in bytes (a 32-bit price is 4 bytes in binary and could be 7+ bytes as a string, think "1299.99" for example) and in CPU: to parse "1299.99" you need to burn a lot of cycles, whereas if it is a number of cents stored as a native 4-byte integer you need at most 3 shifts and 4 binary ORs if you need to change endianness, and in most cases it is a simple memory copy of 4 bytes, 1-2 CPU cycles.
When you have a binary protocol, you can skip fields you are not interested in with a simple "offset = offset + <field-size>" (where <field-size> is a compile-time constant!), whereas with JSON you need to parse the whole thing anyway.
The difference between converting a binary packet to an internal data structure and parsing JSON with the same data into the same structure can easily be ten-fold, and you need to be very creative to parse JSON without additional memory allocations (it is possible, but the code becomes very dirty and fragile), and memory allocation and/or deallocation costs a lot, both in GC languages and in languages with manual memory management.
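A rough Python illustration of the difference (the field layout here is made up, not any real exchange's format):

import json
import struct

# fixed-layout binary message: 4-byte price in cents, 4-byte size, 8-byte timestamp
TRADE = struct.Struct(">iiq")

def parse_binary(buf, offset=0):
    price_cents, size, ts = TRADE.unpack_from(buf, offset)
    # skipping to the next message is just arithmetic on the offset
    return price_cents, size, ts, offset + TRADE.size

def parse_json(msg):
    d = json.loads(msg)  # scans and allocates for the whole document
    return int(round(float(d["price"]) * 100)), d["size"], d["ts"]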
This is typical (from NASDAQ http://www.nasdaqtrader.com/content/technicalsupport/specifi... ):
Prices are integer fields. When converted to a decimal format, prices are in fixed point format with 6 whole number places followed by 4 decimal digits. The maximum price in OUCH 4.2 is $199,999.9900 (decimal, 7735939C hex). When entering market orders for a cross, use the special price of $214,748.3647 (decimal, 7FFFFFFF hex).
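To make the fixed-point format concrete, a quick sanity check of those two values (Python here just as a calculator):

PRICE_SCALE = 10_000  # 4 implied decimal places
print(0x7735939C / PRICE_SCALE)  # 199999.99   -> the $199,999.9900 maximum
print(0x7FFFFFFF / PRICE_SCALE)  # 214748.3647 -> the special market-order cross price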
For NASDAQ it seems to have been something around 430k / share... Buffett's BRK shares threatened to hit that limit a couple months ago: https://news.ycombinator.com/item?id=27044044
X-Stream feeds do, for example.
If you're optimizing for latency JSON is pretty terrible, but most people who use it are optimizing for interoperability and ease of development. It works just fine for that, and you can recover decent bandwidth just by compressing it.
Most big USA exchanges use custom fixed-layout protocols, where each message is described in the documentation, but not in a machine-readable way. European ones still use FAST.
I haven't seen FIX in the wild for data feeds, but it is used by brokers to submit orders to the exchange (our company doesn't do this part, we only consume feeds).
I don't know why, but all crypto exchanges use JSON, not protobufs or something like that, and don't publish any formal schemas.
Fun fact: one crypto exchange put GZIP'ed and base64'ed JSON data inside the JSON it pushed to the websocket, to save bandwidth. IMHO, that is the peak of bad design.
FAST is not particularly common in Europe; the large European venues use fixed-width binary encodings (LSE Group, Euronext, CBOE Europe).
Good protobuf vs msgpack comparison: https://medium.com/@hugovs/the-need-for-speed-experimenting-...
It served up to 70K subscribers, a call center with 30-40 employees, payment systems integration, everything.
Next was an 8-socket Intel server. We were never able to saturate its CPUs - the 300 MHz (or was it 400?) bus was the bottleneck. It served 350-400K subscribers.
And next: we changed the architecture and used 2 servers with 2-socket Intel CPUs again, but that was when GHz frequencies appeared on the market. We dreamed about a 4x AMD server. We got to ~1 million active subscribers.
Nowadays: every phone has more power than those servers did.
A typical React application consumes more resources than that billing system did.
Gigabyte here, gigabyte there - nobody counts them.
/grumpy oldster mode
OTOH a service loading a single core with the main thread is a frequent sight :( Interpreted languages like Python can easily spend 30% of the time just on deserialization overhead, converting the data from a DB into a result set, and then into ORM instances.
The trick is basically that you have to eschew the last 15 years of "productivity" enhancements. Pretty much any dynamic language is out; if you must use the JVM or .NET, store as much as possible in flat buffers of primitive types. I ended up converting order books from the obvious representation (hashtable mapping prices to a list of Order structs) to a pair of SortedMaps from FastUtils, which provides an unboxed float representation with no pointers. That change ended up reducing memory usage by about 4x.
You can fit a lot of ints and floats in today's 100G+ machines, way more than needed to represent the entire cryptocurrency market. You just can't do that when you're chasing 3 pointers, each with their associated object headers, to store 4 bytes.
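The same idea sketched in Python terms (just an analogy; the change described above was JVM-side): keep parallel arrays of unboxed primitives rather than containers of objects.

import bisect
from array import array

# one side of an order book as two parallel, unboxed arrays kept sorted by price
bid_prices = array('d')  # contiguous 8-byte floats, no per-entry object headers
bid_sizes = array('d')

def set_level(price, size):
    i = bisect.bisect_left(bid_prices, price)
    if i < len(bid_prices) and bid_prices[i] == price:
        bid_sizes[i] = size  # update an existing price level in place
    else:
        bid_prices.insert(i, price)
        bid_sizes.insert(i, size)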
Does the $4K include the cost of the RAM? Where can I find these servers? Thanks!
To answer your question, you can't find these servers because they don't exist. A server with 4T of RAM will cost you at a minimum $20,000 and that will be for some really crappy low-grade RAM. Realistically for an actual server that one would use in an actual semi-production setting, you're looking at a minimum of $35,000 for 4TB of RAM and that's just for the RAM alone, although to be fair that 35k ends up dominating the cost of the entire system.
Which is a far cry from the claimed 4TB, but still, damn.
I don't think they're typically making things up. It's what I prefer to call Reddit knowledge. They saw someone else claim it somewhere, they believed it, and so they're repeating it so they can be part of the conversation (people like to belong, like to be part of, like to join). It's an extremely common process on Reddit and most forums, and HN isn't immune to it. Most people don't read much and don't acquire most of their knowledge from high quality sources, their (thought to be correct) wider knowledge - on diverse topics they have no specialization on - is frequently acquired from what other people say and that they believe. So they flip around on Reddit or Twitter for a bit, digest a few nuggets of questionable 'knowledge' and then regurgitate it at some later point, in a process of wanting to participate and belong socially. It's how political talking points function for example, passed down to mimic distributors that spread the gospel to other mimic followers (usually without questioning). It's how religion functions. And it's how most teachers / teaching functions, the teachers are distribution mimics (mimics with a bullhorn, granted authority by other mimics to keep the system going, to clone).
It's because a very high percentage of all humans are mimics. It's not something Reddit caused, of course; it's biology, a behavior that has always been part of humanity. It's a method that increases the odds of successful outcomes and survival of the species, now meeting the Internet age. It's why most people are inherent followers, and can never be (nor desire to be) leaders. It's why few people create anything original, or even attempt to, across a lifetime. It's why such a small fraction of the population are very artistic, particularly drawn to that level of creative expression. If you're a mimic biologically, it's very difficult to be the opposite. This seems to be viewed by most people as an insult (understandably, as mimics are the vast majority of the population and control the vote), however it's not; it's simply how most living things function, system-wise, by mimicry (or even more direct forms of copying). Humans aren't that special; we're not entirely distinct from all the other systems of animal behavior.
That saying, safety in numbers? That's what that is all about. Mimicry. Don't stand out.
The reason most Wall Street money managers can't beat the S&P 500? It's because they're particularly aggressive mimics; they intentionally copy each other toward safe, very prosperous, gentle mediocrity. They play a game of follow with popular trends (each decade or era on Wall Street has popular trends/fads). Don't drift too far below the other mimics and it's a golden ticket.
Nobody got fired for buying IBM? Same thing. Mimic what has worked well for many others is biologically typically a high success outcome pattern (although amusingly not always, it can also in rare occasions lead off a cliff).
The Taliban? The Soviet Union? Nazism? Genocide? Multi generational patterns of mistake repetition passed down from parental units? That's how you get that. People mimic (perhaps especially parental units; biology very much in action), even in cases where it's an unsuccessful/negative pattern. All bad (and good) ideologies have mimic distributors and mimic followers, the followers do what they're told and implement as they're told. And usually there are only a very small number of originators, which is where the mimic distributors get their material.
The concept of positive role models? It's about mimicry toward successful outcomes.
HN doesn't look exactly like SlashDot, but it's absolutely just like SlashDot.
Intelligent discourse by knowledgeable persons.
The omission of GNAA and "frosty" posts is a massive boon.
Looking on eBay I can find some pretty decent R820s with 512GB each for right around $1500 apiece. Not counting any storage; even if they come used with some spinning hard drives, I would end up replacing those with SSDs. So more like three servers, 1.5TB of RAM, for $4500.
My data sets are far too big to fit into memory/cache. Disk pressure can be alleviated by optimizing queries but it's a game of whack-a-mole.
I have exhausted EBS i/o and been forced to resort to dirty tricks. With RDS you can just pay more but that only scales to a point – normally the budget.
This reminds me of back in 2003, a friend of mine worked for an online casino vendor; basically, if you wanted to run an online casino, you'd buy the software from a company and customize it to fit your theme.
They were often written in Java, ASP.NET, and so on. They were extremely heavyweight. They'd need 8-10 servers for 10k users. They hogged huge amounts of RAM.
My friend wrote the one this company was selling in C. Not even C++, mind you, just C. The game modules were chosen at compile time, so unwanted games didn't exist. The entire binary (as in, 100% of the code) compiled to just over 3 MB when stripped. He could handle 10k concurrent users on one single-core server.
I'm never gonna stop writing things in Python, but it still amazes me what can happen when you get down close to the metal.
Of course, a lot of it depends on what your app does for each request but most apps are simple enough and can live with being a monolith / single fat binary running on a single instance.
The problem with today's DevOps culture is that it presents K8s as the answer for everything, instead of defining a clear line on when to use it and when not to.
Codebase was pure server-side Kotlin running on the JVM. Jackson for JSON parsing, when the exchange didn't provide their own client library (I used the native client libraries when they did). Think I used Undertow for exchange websockets, and Jetty for webserving & client websockets. Postgres for DB.
The threading model was actually the biggest bottleneck, and took a few tries to get right. I did JSON parsing and conversion to a common representation on the incoming IO thread. Then everything would get dumped into a big producer/consumer queue, and picked up by a per-CPU threadpool. Main thread handled price normalization (many crypto assets don't trade in USD, so you have to convert through BTC/ETH/USDT to get dollar prices), order book update, volume computations, opportunity detection, and other business logic. It also compared timestamps on incoming messages, and each new second, it'd aggregate the messages for that second (I only cared about historical data on a 1s basis) and hand them off to a separate DB thread. DB would do a big bulk insert every second; this is how I kept database writes below Postgres's QPS limit. Client websocket connections were handled internally within Jetty, which I think uses a threadpool and NIO.
Key architectural principles were 1) do everything in RAM - the RDS machine was the only one that touched disk, and writes to it were strictly throttled 2) throw away data as soon as you're done with it - I had a bunch of OOM issues by trying to put unparsed messages in the main producer/consumer queue rather than parsing and discarding them 3) aggregate & compute early - keep final requirements in mind and don't save raw data you don't need 4) separate blocking and non-blocking activities on different threads, preferring non-blocking whenever possible and 5) limit threads to only those activities that are actively doing work.
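A much-simplified Python sketch of that per-second batching idea (queue sizes and the bulk_insert stub are hypothetical; the real thing was Kotlin with a per-CPU threadpool):

import queue
import threading

parsed = queue.Queue(maxsize=100_000)  # parsed, normalized messages from the IO threads
batches = queue.Queue()                # one aggregated batch per second for the DB thread

def bulk_insert(sec, rows):
    pass  # stand-in for the single bulk INSERT per second that keeps write QPS low

def business_logic():
    current_sec, agg = None, []
    while True:
        msg = parsed.get()  # blocks on the queue, no busy-waiting
        sec = int(msg["ts"])
        if current_sec is not None and sec != current_sec:
            batches.put((current_sec, agg))  # hand the finished second to the DB thread
            agg = []
        current_sec = sec
        agg.append(msg)  # order book updates, volume computations, etc. would happen here

def db_writer():
    while True:
        sec, agg = batches.get()
        bulk_insert(sec, agg)

threading.Thread(target=business_logic, daemon=True).start()
threading.Thread(target=db_writer, daemon=True).start()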
For that particular use-case (or related financial ones) I'd consider Rust, which was a bit too immature when I was working on this but would give you some extra speed. HFT is winner-take-all and the bar has risen significantly even in the last couple years, so if I were putting my own money at risk now I'd want the absolute fastest processing possible.
The real-time data was visualized on a website. Here is an example.
I'm guessing if you put all this data into Kinesis or message queues it would end up costing quite a bit more.
If you do it individually, there are public developer docs for each exchange that explain how their API works. It's generally free as long as you're not making a large number of active trades.
They're rent seeking in other ways though, no worries.
If you’re anywhere in the US, let me know.