IBM Power System E980 – Up to 64TB RAM per node [pdf] (ibm.com)
59 points by peter_d_sherman 26 days ago | 37 comments



The redbooks are hidden gems - what documentation should be. When I used to work with these (very nice) bits of kit, the redbooks were my bible.

They'd tell you the gory details, and you could learn from the experiences of the authors. The "official" documentation was good, but it would never say "you don't want to do it this way, it'll end in tears".

PS: Note that the book is 2 years old now. And my experience with them closer to 10 ;-)


It's hard to convey how different acquiring knowledge used to be in the old days.

There was so much less information .. no Google, no online groups, and certainly no Stack Overflow .. but redbooks were true compendiums of hard-won knowledge.

Today the core skill is being able to drive Google, to quickly sift search results, to parse through page after page of shitty, low-quality cruft to find the nuggets of knowledge you need.

Back in the day, you could read a red book on a plane flight and land with a greatly enhanced foundational understanding of the subject.


(Page 3:)

>"The Power E980 server provides the following hardware and components and software features:

Up to 192 POWER9 processor cores.

Up to 64 TB memory.

Up to 32 Peripheral Component Interconnect Express (PCIe) Gen4 x16 slots in system nodes.

Initially, up to 48 PCIe Gen3 slots with four expansion drawers. This increases to 192 slots with the support of 16 I/O drawers.

Up to over 4,000 directly attached SAS disks or solid-state drives (SSDs).

Up to 1,000 virtual machines (VMs) (logical partitions (LPARs)) per system."


They should definitely offer windowed cases, lots of colored LEDs, and neonz.


You joke, but supercomputers often have elaborate, decorative racks and vinyl decals.


Cray fans know that if it doesn't have padded seating all around, it's not a supercomputer.


Top-end POWER servers like this tend to come in customised racks, and while they don't have windows, a fully populated system will have a lot of status LEDs visible. The racks themselves are also quite stylish, not just a simple box - at least in my memories of dealing with a p590 and seeing many other models later on.


Isn't there a story about how a supercomputer used a MacBook just to display a nice logo on the door - and for nothing else?


As I recall, a reporter quipped to Seymour Cray that Apple uses a Cray to help design the Mac, to which Cray replied that he uses a Mac to design his supercomputers.



It doesn't mean anyone will take any notice (or that it's feasible to get data in and out). Long ago, supporting crystallography, I pointed out that each image would roughly fit in the cache, and the whole dataset in memory. (On Alphas with ~10MB and 1GiB, I think -- though the storage array was only ~250GB.) The developer of the processing program still didn't see fit to replace the disk-based sort written for a PDP11, which was where the bottleneck was.


64 TB is all that anybody would ever need on a computer.


No, that's 640.


...in fairness, I do struggle to think what you could ever do to exceed 640TB of RAM, other than some very big scientific/simulation things (or SAP or Google Chrome ;]). I mean, that's more persistent storage than I expect most people to need for at least another decade (or at least whenever virtual reality really takes off), let alone RAM.


If you record your life at 4K compressed, that’s ballpark 25 Mbit/s ≈ 3 MB/s ≈ 250 GB/day, so you would run out of RAM in 8 years or so (more like 12 years if you compress the nightly video better). I fear there are people alive today who will ‘need’ that.

Alternatively, slap a 4K video camera on all 75k spectators in a large stadium, and you’ll fill that space before the match is over.

Also, the LHC could generate 25 GB/s (https://home.cern/science/computing/processing-what-record). They could easily fill it in a day (but don’t. They discard most of the data from their instruments in milliseconds)
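
For anyone who wants to redo the back-of-the-envelope numbers, here's a few lines of C with those rough rates plugged in (the rates are the guesses above, not measurements):

    /* Rough fill-time arithmetic for 640 TB, using the guessed rates above. */
    #include <stdio.h>

    int main(void)
    {
        const double ram_bytes    = 640e12;      /* 640 TB */
        const double video_bps    = 25e6 / 8.0;  /* 25 Mbit/s 4K stream, in bytes/s */
        const double lhc_bps      = 25e9;        /* ~25 GB/s from the LHC */
        const double secs_per_day = 86400.0;

        printf("4K life-log: ~%.0f GB/day, ~%.1f years to fill 640 TB\n",
               video_bps * secs_per_day / 1e9,
               ram_bytes / (video_bps * secs_per_day) / 365.0);
        printf("LHC at 25 GB/s: ~%.1f hours to fill 640 TB\n",
               ram_bytes / lhc_bps / 3600.0);
        return 0;
    }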


If my math is close enough, 640TB is 640kB * 1 billion. As we know, 640kB should be enough memory for anyone, but the world population is a bit more than a billion, so you'll need more RAM to have enough for everyone.


A sufficiently complex immersive VR with enough resolution could take up that kind of memory, but it's very much in the realm of science fiction right now.

When I'm dead and my mind is uploaded to one, a machine like this may prove comfortable.


> a machine like this may prove comfortable

Future virtual real estate listing: "Spacious 640TB expanse gives you room to let your mind wander. Large enough for two entities to share and even spawn subprocesses!"


I've seen databases as large as 10 petabytes and blob storage services in the 10s of exabyte range. Granted, that was all disk but it'd be cool if you could store it all in ram on one machine.


You got the joke! ;-)


IIRC, you can get E980-based VMs on IBM cloud running AIX or i. Also, IIRC, under PowerVM, these cores run 4 threads each.


People here seem to be struggling for what to do with 64TB of memory.

The obvious ones are fluid dynamics, weather simulations, n-body problems, and simulation of quantum systems.

I know that Capital One gave a talk about an in-memory anomaly detection system for fraudulent transactions, which fit in under 1 terabyte.

A less obvious one would be to accelerate training of NLP models like GPT-3, where you need the memory to keep both the dataset and the model you are training resident. NVIDIA seems to have this idea:

https://top500.org/system/179397/

Also, you could clean data by just loading everything you collected into RAM and cleaning it there, instead of constantly swapping to disk. Never underestimate the power of just buying RAM.


> The obvious ones are fluid dynamics, weather simulations, n-body problems, and simulation of quantum systems.

Those are typically run on distributed memory; the question should probably have been about shared memory. They're likely to be memory bound, depending on the application, but it's not obvious that shared memory in ~100 cores would win. Then consider checkpointing, for instance. (In my experience it's bioscientists who think they need huge shared memory but may well not, or couldn't feed it fast enough anyhow.)

Recent top supercomputers have had ~1PB or more total memory, though that's pretty variable, and there probably won't be many jobs using a large fraction of the machine. They may also (or only) have HBM.


>People here seem to be struggling for what to do with 64TB of memory.

Ha, you think! I'm a ZFS user.


> Most implementations of transactional memory are based on software. The POWER9 processor-based systems provide a hardware-based implementation of transactional memory that is more efficient than the software implementations and requires no interaction with the processor core, therefore enabling the system to operate at maximum performance.

Wow. You'd need a compiler that supports this feature (I think I read that GCC 4.7 has experimental support). This would suit their traditional customer base - banks and other firms that want atomic operations at speed. There are probably some limitations around transactions across nodes, since you'd be leaving the chassis.
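
For the curious, here's a minimal sketch of what that looks like from userland with GCC's PowerPC HTM builtins (compile with -mhtm on POWER8/POWER9; the counter, retry limit, and mutex fallback are illustrative choices of mine, not anything the redbook prescribes):

    /* Rough sketch: transactional update with a lock-based fallback,
       using GCC's PowerPC HTM builtins.  Build with: gcc -mhtm -pthread */
    #include <htmintrin.h>
    #include <pthread.h>

    static pthread_mutex_t fallback_mutex = PTHREAD_MUTEX_INITIALIZER;
    static volatile int fallback_held;   /* set while a thread holds the mutex */
    static long shared_counter;

    void counter_add(long delta)
    {
        int retries = 10;

        while (1) {
            if (__builtin_tbegin(0)) {
                /* Transactional path: abort if someone is in the lock-based
                   fallback; otherwise the update commits atomically at tend. */
                if (fallback_held)
                    __builtin_tabort(0);
                shared_counter += delta;
                __builtin_tend(0);
                return;
            }

            /* Transaction failed.  Take the mutex if the failure is
               persistent or we've retried enough times. */
            if (--retries <= 0
                || _TEXASRU_FAILURE_PERSISTENT(__builtin_get_texasru())) {
                pthread_mutex_lock(&fallback_mutex);
                fallback_held = 1;
                shared_counter += delta;
                fallback_held = 0;
                pthread_mutex_unlock(&fallback_mutex);
                return;
            }
        }
    }

Transactional state lives in the hardware thread and conflict detection rides on cache coherence, so it only helps within a single coherent system; it's not a distributed primitive.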


Such an amazing machine. If I had too much money and time on my hands, I'd put one of these systems into a "pocket" machine room (one cold aisle, one hot aisle). The ideas you could explore would be pretty amazing.


NetBSD wasn't mentioned so I am left wondering if it runs on this "toaster."


Of course it runs NetBSD.


Funnily enough, POWER is one of the notable platforms NetBSD has basically no support for at all. NetBSD will run on 32-bit PowerPC Macs, but only those. No support for 64-bit kernels or userlands, and only the most bare-bones support for early 32-bit Open Firmware POWER systems.

OpenBSD recently added support for the PowerNV/OpenPOWER platform, could probably use their locore to bootstrap a NetBSD system. Still wouldn't run on a PowerVM system like the E980 :)


Who is the typical customer for this product, and to what application is it usually put?


These guys are analyzing one use case where you can sidestep a clustered Oracle pricing peak: if you can cram your whole thing into memory, you get lower latency than disk or slow remote DB queries, and a smaller Larry bill.

(zoom the pictures for details)

https://precisionitinc.com/project/sample-total-cost-of-owne...


Insane that Oracle DB for that many cores costs 7M USD.



Web developers running RoR


Huh. Isn't it a lot more expensive to do that on E980s than it would be on commodity servers using commodity Intel or AMD CPUs?


They're making a joke that Ruby on Rails takes a lot of memory


Think in-memory databases



