They'd tell you the gory details, so you could learn from the authors' experience. The "official" documentation was good, but it would never say "you don't want to do it this way, it'll end in tears".
PS: Note that the book is 2 years old now. And my experience with them is closer to 10 ;-)
There was so much less information .. no Google, no online groups, and certainly no Stack Overflow .. but red books were true compendiums of hard-won knowledge.
Today the core skill is being able to drive Google, to quickly sift search results, to parse through page after page of shitty, low-quality cruft to find the nuggets of knowledge you need.
Back in the day, you could read a red book on a plane flight and land with a greatly enhanced foundational understanding of the subject.
>"The Power E980 server provides the following hardware and components and software
Up to 192 POWER9 processor cores.
Up to 64 TB memory.
Up to 32 Peripheral Component Interconnect Express (PCIe) Gen4 x16 slots in system
Initially, up to 48 PCIe Gen3 slots with four expansion drawers. This increases to 192 slots with the support of 16 I/O drawers.
Up to over 4,000 directly attached SAS disks or solid-state drives (SSDs).
Up to 1,000 virtual machines (VMs) (logical partitions (LPARs)) per system."
Alternatively, slap a 4K video camera on all 75k spectators in a large stadium, and you’ll fill that space before the match is over.
Also, the LHC could generate 25 GB/s (https://home.cern/science/computing/processing-what-record). They could easily fill it in a day (but they don't; they discard most of the data from their instruments within milliseconds).
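Back-of-the-envelope numbers (the ~25 Mbit/s per 4K stream is just my assumption; the 25 GB/s is from the CERN page above):

    /* rough fill-time arithmetic for 64 TB; compile with: cc fill.c && ./a.out */
    #include <stdio.h>

    int main(void) {
        double mem = 64e12;                   /* 64 TB of memory */

        /* stadium: 75,000 cameras at an assumed ~25 Mbit/s per 4K stream */
        double stadium = 75000 * (25e6 / 8.0);
        printf("stadium: %.0f s (~%.1f min)\n", mem / stadium, mem / stadium / 60);

        /* LHC: 25 GB/s sustained, per the CERN link */
        double lhc = 25e9;
        printf("LHC: %.0f s (~%.1f h)\n", mem / lhc, mem / lhc / 3600);
        return 0;
    }

By that rough math, the stadium cameras fill 64 TB in under five minutes and the LHC does it in well under an hour.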
When I'm dead and my mind is uploaded to one, a machine like this may prove comfortable.
Future virtual real estate listing: "Spacious 640TB expanse gives you room to let your mind wander. Large enough for two entities to share and even spawn subprocesses!"
The obvious ones are fluid dynamics, weather simulations, n-body problems, and simulation of quantum systems.
I know that Capital One gave a talk about how they had an in-memory anomaly detection system for fraudulent transactions that was under 1 terabyte.
A less obvious one would be to accelerate training of NLP systems, like training GPT-3. You need the memory to keep not only the dataset but also the model you are training in memory. NVIDIA seems to have this idea:
Also, you could clean data by just loading all of the data that you collected into RAM and then cleaning it there, instead of swapping to disk constantly. Never underestimate the power of just buying RAM.
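A minimal sketch of the "just load it all into RAM" approach in C (the file name and the cleaning rule, dropping blank lines, are made up for illustration; it assumes the whole dataset fits in memory):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <ctype.h>

    int main(void) {
        /* slurp the entire file into RAM instead of streaming it from disk */
        FILE *f = fopen("collected.csv", "rb");      /* hypothetical input file */
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        char *buf = malloc(size + 1);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
        buf[size] = '\0';
        fclose(f);

        /* the "cleaning" pass, done entirely in memory: drop blank lines */
        for (char *line = strtok(buf, "\n"); line; line = strtok(NULL, "\n")) {
            int blank = 1;
            for (char *p = line; *p; p++)
                if (!isspace((unsigned char)*p)) { blank = 0; break; }
            if (!blank) puts(line);
        }
        free(buf);
        return 0;
    }

With 64 TB to play with, the fopen/fread could just as well be an mmap of a multi-terabyte file.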
Those are typically run on distributed memory; the question should probably have been about shared memory. They're likely to be memory-bound, depending on the application, but it's not obvious that shared memory across ~100 cores would win. Then consider checkpointing, for instance. (In my experience it's bioscientists who think they need huge shared memory but may well not, or couldn't feed it fast enough anyhow.)
Recent top supercomputers have had ~1PB or more total memory, though that's pretty variable, and there probably won't be many jobs using a large fraction of the machine. They may also (or only) have HBM.
Ha, you think! I'm a ZFS user.
Wow. You'd need a compiler that supports this feature (I think I read that GCC 4.7 has experimental support). This would support their traditional customer base - banks and other firms that want atomic operations at speed. There are probably some limitations around transactions across nodes, since you'd be leaving the chassis.
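If I'm thinking of the same feature, it's GCC's -fgnu-tm transactional memory extension; here's a minimal sketch of the syntax (whether it maps onto POWER's hardware transactional memory or falls back to libitm's software path depends on the target and flags):

    /* compile with: gcc -fgnu-tm transfer.c -o transfer */
    #include <stdio.h>

    static long balance = 1000;

    void withdraw(long amount) {
        /* the block executes atomically with respect to other
           __transaction_atomic regions touching the same data */
        __transaction_atomic {
            balance -= amount;
        }
    }

    int main(void) {
        withdraw(250);
        printf("balance = %ld\n", balance);
        return 0;
    }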
OpenBSD recently added support for the PowerNV/OpenPOWER platform, could probably use their locore to bootstrap a NetBSD system. Still wouldn't run on a PowerVM system like the E980 :)
(zoom the pictures for details)