PPC was OK; the killer was that you had to write code specifically for the Cell (co)processors and their limited memory addressing if you wanted the promised compute performance.
Location: Vancouver, Canada
Remote: Yes (done it before), or On-Site
Willing to relocate: No
Technologies:
C/C++, Python/Ruby, Erlang/Elixir, Java, and HTML/CSS/JavaScript
MySQL/PostgreSQL, Redis, BigQuery, BigTable, ElasticSearch, RabbitMQ
Win32/Linux/BSD, AWS/GCP, Docker/Kubernetes, Scrapy/numpy/pandas
OpenGL, Direct3D 11, WebAssembly
Résumé/CV: https://mtwilliams.io/#cv
Email: me@mtwilliams.io
Looking for systems engineering and backend engineering roles.
Experienced in building distributed systems that operate at scale.
Most recently, I was the sole data engineer for a video game with several million MAU (~15 million users total) that routinely saw 1,000-10,000+ req/s per microservice.
I really enjoy working on things that operate at scale (in users or data) or provide a lot of leverage for my teammates. I'm open to a lot of roles.
(I'm currently a Physics student, which means I can only commit ~30hrs/wk.)
> Latency kills: memory latency, network latency, and storage latency.
I think the point of the article is that
> memory latency
is actually a hierarchy of its own, and non-linear depending on access patterns.
Instruction fetch latency, branch mis-predict latency, cache hit/miss latency (L1 through L3), cache pressure, register pressure, TLB hit/miss, VM pressure, etc. all become significant at scale.
Just knowing big-O/big-Ω characteristics isn't enough.
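To make the "big-O isn't enough" point concrete, here's a toy sketch. Both loops do identical O(n) work, but one walks memory sequentially (prefetcher- and cache-friendly) while the other jumps around unpredictably. The effect is muted in Python by interpreter overhead; in C the gap from cache misses alone is far larger.

```python
import random
import time

def sum_sequential(data):
    """Touch memory in order: caches and the prefetcher love this."""
    total = 0
    for x in data:
        total += x
    return total

def sum_shuffled(data, order):
    """Same O(n) work, but each access lands at an unpredictable
    index, defeating locality (plus one extra indirection)."""
    total = 0
    for i in order:
        total += data[i]
    return total

n = 2_000_000
data = list(range(n))
order = list(range(n))
random.shuffle(order)

t0 = time.perf_counter()
s1 = sum_sequential(data)
t1 = time.perf_counter()
s2 = sum_shuffled(data, order)
t2 = time.perf_counter()

# Identical result, identical big-O, very different wall-clock time.
assert s1 == s2
print(f"sequential: {t1 - t0:.3f}s, shuffled: {t2 - t1:.3f}s")
```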
In essence, it was an interrupt storm[1] caused by a design flaw that resulted in the rendezvous radar reporting erroneous readings at a high frequency.[2] The computer couldn't handle all of this work on top of what it was already doing (too much work, too little time), so ultimately it had to prioritize and drop some of the less important work.
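The "prioritize and drop" idea can be sketched in a few lines. This is a loose toy illustration of priority-based load shedding, not the actual AGC Executive, and the job names and priorities are made up for the example:

```python
def run_cycle(jobs, capacity):
    """Run the `capacity` highest-priority jobs this cycle and shed
    the rest (lower number = higher priority). Toy illustration only."""
    jobs = sorted(jobs, key=lambda j: j[0])
    ran, shed = jobs[:capacity], jobs[capacity:]
    return [name for _, name in ran], [name for _, name in shed]

# Critical guidance work keeps running; lower-priority housekeeping
# gets dropped when spurious interrupts eat the CPU budget.
jobs = [(1, "guidance"), (2, "navigation"), (5, "display-update"),
        (4, "radar-housekeeping"), (3, "throttle-control")]
ran, shed = run_cycle(jobs, capacity=3)
print("ran:", ran)
print("shed:", shed)
```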
Near-realtime (say sub-1-minute) feeds can be leveraged to extract a lot of value out of the data you're collecting. Perhaps not for your typical SaaS startup, but for free-to-play video games you end up leaving so much value on the table if you can't quickly (automatically) respond to spend or churn indicators.
I think a good rule of thumb for this is looking at how fast your decisions are made. If you're making decisions daily then refreshing your data every 24 hours is probably good enough. If you're making decisions every hour, then every 60 minutes is probably good enough.
Another factor is scale. When you're dealing with 6-figure CCUs and are trying to optimize conversion or retention through split-testing, quickly figuring out which variants are performing anomalously poorly can save you a whole lot of money.
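One simple way to flag an anomalously poor variant early is a two-proportion z-test on conversion rates. A minimal sketch, with hypothetical conversion numbers and a hypothetical `z_score` helper (this ignores peeking/sequential-testing corrections, which matter if you check continuously):

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors variant B's
    conversion rate sits below (negative) or above (positive) A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled std. error
    return (p_b - p_a) / se

# Hypothetical numbers: control converts at 10%, variant at 6%.
z = z_score(conv_a=100, n_a=1000, conv_b=60, n_b=1000)
if z < -1.96:  # ~95% confidence the variant is genuinely worse
    print(f"variant underperforming (z = {z:.2f}); kill it early")
```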
I reckon there are at least 50 titles that can benefit from streaming analytics, which immediately affects anywhere from 100-500 employees (analysts, engineers) and likely influences over 1000 (broader company). That's a significant portion of the field.
Logistics is your force multiplier. Sacrificing the (excessive) performance of 7.62 to increase the number of rounds you can field is well worth it. That said, 5.56 still holds up well in CQB, provided you have a sufficiently long barrel and a bullet with decent ballistics. But therein lies the problem. The move to shorter barrels means lower muzzle velocity, which severely gimps a round that leans on velocity for its lethality. The move to shorter barrels has also necessitated the move to piston-operated guns like the HK416, but those don't come without negatives, especially when guys are over-gassing out of fear of stoppages.
It really wasn't all that great during the first couple years. And it still has a few annoying caveats today. The advertised flexibility (port sharing etc) most probably wasn't the only reason for http.sys back in the day :)
EDIT: that is not to say that it is as horrible to use as epoll (at least in multithreaded programs), but my favorite has to be kqueue.
It's a pretty useful attack vector since you can get an arbitrary program to load your payload under certain circumstances, so you don't even need malicious code running if you can find a vulnerable target. *cough* SharePoint *cough*
Yes -- you may not be able to convince a program to download a file, but you may be able to tell it to use an improperly sanitized plugin name via a static, non-executable document that someone downloads and tries to view.