
I'm curious - what are people running on boxes like these that makes good use of 80 cores and 768 GB of RAM?



A single Electron application. Pick any of them.



Isn't that single threaded?


Nope! It uses at least two threads.


Sure, but these days you need to run many of them for different apps :D


Do you think it will run Doom?


One instance of Doom compiled to WASM on Electron, but only if you lower the resolution to get it to run smoothly.


Hacker News: Desktop Edition. Keeps an open tab for every submission favorited. ;-)


Low-latency information retrieval over read-heavy, write-light content sets.

An example that might solidify the idea: pack a Wikipedia snapshot into memory for search and serve ~1M queries per second (roughly 12k qps per core across 80 cores).
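
To make the shape of that concrete, here's a toy in-memory inverted index in Python. The documents and the search helper are invented for illustration; a real service would shard across cores and use far tighter data structures:

    from collections import defaultdict

    # Toy corpus; a real deployment would hold a full Wikipedia
    # snapshot instead of three strings.
    docs = {
        1: "arm servers with many cores",
        2: "wikipedia snapshot search",
        3: "cores and memory for fast search",
    }

    # Build the inverted index once (the write-light part).
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.split():
            index[word].add(doc_id)

    def search(query):
        # Intersect posting lists; pure in-memory lookups, no I/O.
        ids = [index.get(w, set()) for w in query.split()]
        return set.intersection(*ids) if ids else set()

    print(search("cores search"))  # -> {3}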


Nice.


I worked on a tool that processes a largish dataset using only in-memory data structures, since that's much faster and simpler than using a system like Spark. Beyond that, the nature of the processing algorithm (a reduce, in effect) makes it kind of pointless to run on a cluster of nodes.

The dataset is 2-3B records with 5-12 64-bit values each, stored in a few dozen files in the Apache Arrow format. Taking the midpoints of those ranges (2.5B records x 8.5 values x 8 bytes), that's about 170 GB of raw data alone. With the overhead of data structures, I was running the process with ~400 GiB of RAM and could have done more on a beefier machine.

It took about 20-30 minutes to run the full algorithm over those tens of billions of data points, and this approach was perfect for the use case. No overhead from Spark and all of its dependencies - just one program and a bunch of input files, and it's done when I get back from lunch.
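
A minimal sketch of that shape of job, assuming Arrow IPC files and a made-up "value" column (the real tool's schema and reduce are of course different):

    import glob

    import pyarrow as pa
    import pyarrow.compute as pc
    import pyarrow.ipc as ipc

    # Single-process reduce: memory-map each Arrow file, read it as a
    # table, and fold a running aggregate. No cluster, no scheduler.
    total = 0
    rows = 0
    for path in glob.glob("data/*.arrow"):
        with pa.memory_map(path) as source:
            table = ipc.open_file(source).read_all()
            total += pc.sum(table.column("value")).as_py()
            rows += table.num_rows

    print(f"mean over {rows:,} rows: {total / rows:.3f}")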


Most people have "medium data" problems, just like you, not "big data" problems that, practically speaking, only the FAANGs have.


Curious how it would have performed if you had loaded it all into a SQLite database and run a SQL query instead. If the B-tree structure used by SQLite were small enough to fit in memory, it should still be fast, I assume.
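
Something like this sketch, scaled up (the "records" table and its columns are invented; at billions of rows, the INSERTs alone would likely dominate the runtime):

    import sqlite3

    # Same reduce expressed as a SQL query against an in-memory
    # SQLite database.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE records (k INTEGER, v INTEGER)")
    con.executemany(
        "INSERT INTO records VALUES (?, ?)",
        ((i, i * 2) for i in range(1_000_000)),
    )
    (total,) = con.execute("SELECT SUM(v) FROM records").fetchone()
    print(total)  # 999999000000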


Microsoft Teams?


> good use


Close the thread, you win.


This is just hitting the market, so it will be interesting to see where it goes.

If the vendors were ready for something like this on the software side, it would be great for edge compute where low-latency response is required - e.g. a remote utility substation handling and reacting to a large array of sensors, each feeding 60 data points per second. In some use cases, a round trip to the control centre would be too slow. Basic grid control is well handled today, but I could see optimizations benefiting from this. Vendors and utilities are way behind on this, though.


With four 10GbE ports, that many cores, and that much memory, I can imagine this being perfect for web hosting or virtual machines.


That will only help if every NIC has about 20 TX queues or so. If the NIC can't spread work across the cores, or the driver or app can't, then all those cores won't help.
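
On Linux you can sanity-check that from sysfs (quick sketch; "eth0" is a placeholder interface name):

    import os

    # Count the TX/RX queues a NIC exposes via sysfs; multi-queue
    # NICs show one tx-N/rx-N directory per hardware queue.
    iface = "eth0"  # substitute your interface
    queues = os.listdir(f"/sys/class/net/{iface}/queues")
    tx = sum(q.startswith("tx-") for q in queues)
    rx = sum(q.startswith("rx-") for q in queues)
    print(f"{iface}: {tx} TX queues, {rx} RX queues")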


Oracle Cloud provides 4 Altra cores, 24 GB of RAM, and 200 GB of storage for free (supposedly indefinitely). I use it for a Minecraft server. It handles ~15 players with a half-dozen plugins without players complaining about any lag. I only use 4 GB of the RAM because of Java's garbage collector - and Minecraft is heavily single-threaded, so I'm probably not using all the cores very effectively, but it's free and it works.


Fastly has published their server specs:

2 Intel(R) Xeon(R) CPU E5-2690 @ 2.90GHz

768 GB of RAM (384 GB per Processor)

18 TB of SSD Storage (Intel 3 Series or Samsung 840 Pro Enterprise Series)


I was just looking for that as an example - I remembered they had something akin to this!


I'd throw some video encoding work at something like that. Easy enough to eat up all that RAM and all those cores.


https://chipsandcheese.com/2021/08/05/neoverse-n1-vs-zen-2-a...

No. This core is terrible at encoding.

EDIT: And encoding is limited to ~16 cores in practice. It seems like beyond that, the communication between threads becomes too costly to be useful. Unless you plan on doing five simultaneous encodes at a time, you're going to have to find something else to do with all those cores.
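
If you do want to saturate the box that way, the workaround is several independent encodes side by side - a rough sketch using ffmpeg's -threads flag, with invented file names:

    import subprocess

    # Five independent 16-thread encodes, since a single encode stops
    # scaling around 16 threads. Input/output names are placeholders.
    jobs = [
        subprocess.Popen([
            "ffmpeg", "-y", "-i", f"input{i}.mp4",
            "-c:v", "libx264", "-threads", "16",
            f"output{i}.mp4",
        ])
        for i in range(5)
    ]
    for job in jobs:
        job.wait()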


Most of the new encoding tools, such as Av1an, split videos by scene and then run parallel encodes from there.

For a decently sized video (say a TV episode) there are usually ~100 split points to divvy out to encoders.

https://github.com/master-of-zen/Av1an


Is there a reason that 175W of processing power on 80 small cores at 2 GHz would be faster than, for example, an AMD EPYC 7F32, which has a similar TDP of 180W and 8 cores with 2 threads each that run at ~4 GHz?

Naively, assuming identical instruction sets (I know they're not), 16 threads at 4 GHz gives you 64 thread-GHz, less than half of the 160 thread-GHz from 80 cores at 2 GHz. But that can't be the whole story.


AVX2 (256-bit SIMD instructions) is huge in the encoding world. A lot of these encoding algorithms operate over block sizes (e.g. 8x8 macroblocks) that let SIMD instruction sets shine.

ARM only has 128-bit SIMD through NEON. It's reasonably well designed, but nothing beats the brute force of doing 256 bits at a time (or 512 bits, in the case of Intel's AVX-512).


I would use it for CI runners.


In my case, Houdini simulations for my VFX hobby ("only" 64 cores and 128 GB of RAM, though).


I think the target is low power consumption server applications.


Based on the spec and the form factor, I assume an oven.


Might be nice in a render farm?


Not at 2.80 GHz it isn't.


Minecraft servers? ;-)


Oracle Cloud provides 4 Altra cores and 24 GB of RAM for free. I can support ~15 players with a half-dozen plugins without players complaining of any lag. Minecraft is very single-threaded, though, and I'm only using 4 GB of the RAM because of Java's garbage collector - but it does work, and it's supposedly free indefinitely.


Moves the goalposts on the term "good".



