Hacker News
Want 1.5TB of RAM in Your PC? It Will Cost You (zdnet.com)
35 points by Khelouiati 45 days ago | 61 comments

If you're an enterprise that needs a single CPU server with 1.5TB of RAM (to run SAP HANA or some other in-memory DB most likely) then the RAM cost, or really the entire hardware cost, is probably the smallest line item in the entire business case, more or less a rounding error. The author seems to be approaching this from the standpoint of someone building a home rig for some reason.

I have been buying 1TB+ servers for years now, most recently to run Redis. $50k for a box is nothing when you consider we used to spend 10-20x that on Sun or SGI boxes.

Big-memory PC servers [1] were at around 6 TB four years ago with 64 GB DIMMs [2]; 1.5 TB is not a lot for a server today.

[1] Conventional single-motherboard designs, that is; the larger of the top500 machines seem to be at single-digit petabytes now.

[2] https://news.ycombinator.com/item?id=9582023

Someone with a surplus of personal income will need that much RAM just to post their Plex server on https://old.reddit.com/r/homelab.

I've done some shopping around, and the best pricing I can find puts these 128GB RAM modules at about $1,500.

Each. Yes, each.

Some back-of-the-envelope math -- or math in your head, if you're good at that sort of thing -- puts the price of 1.5TB of RAM at a cool $18,000.
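As a quick sketch of that envelope math (using the $1,500-per-module figure quoted above):

```python
# Back-of-the-envelope cost for 1.5 TB of RAM built from 128 GB modules,
# at the street price quoted in the thread.
module_gb = 128
module_price = 1_500          # USD per 128 GB module, as quoted above
target_gb = 1.5 * 1024        # 1.5 TB expressed in GB

modules_needed = target_gb / module_gb        # 12 modules
total_cost = modules_needed * module_price    # $18,000

print(f"{modules_needed:.0f} modules -> ${total_cost:,.0f}")
```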

If my memory serves me well, I routinely bought 1MB RAM modules for 5,500 Spanish pesetas (33 euros). This was around 1987-1990, I believe. If my math is right, that would be more than 4 million euros for 128GB of RAM.

In 1993, one of my first tasks in a new job as a sysadmin was to install 256MB of RAM into a DEC Alpha 3000/500. I remember the RAM cost on the order of $20k... I don't remember the exact price, but I remember thinking that the RAM was worth more than I was, and I'd better not screw it up.

graph of "Historical Cost of Computer Memory and Storage", https://jcmit.net/mem2015.htm . Memory data at https://jcmit.net/memoryprice.htm .

1994 prices are US$30.0/MB or $8K for 256MB of PC-grade memory. Looks like the DEC machine used 200-pin SIMMs, says https://www.memoryx.com/ms15da.html (listing 64MB for $200). Don't know how that compares.

BTW, $400 for a DEC Alpha 3000/500 at https://www.ebay.com/itm/DEC-DIGITAL-ALPHA-SERVER-3000-500-P... .

Definitely the time to be testing the ESD ground point first... And continuity for the wriststrap cord... In fact, better touch a grounded chassis at the same time!

Once adjusted for inflation, it's more like 7 million.

To be fair, you have to factor in Moore's law. I'm not great at math, but I think you'd still end up with something like half a million. $18k for 1.5TB is still a steal.

I bought an 8MB RAM module at wholesale pricing in ~1993 for about $400, so that pricing seems a bit low to me, especially for the time window given.

An Arduino UNO comes with 2 kB of RAM installed and retails for about $20. If you wanted to get to 1.5 TB of RAM by connecting them (not accounting for any overhead on each controller), you would need 800 million Arduino UNOs, which would cost you about 16 billion dollars.

> An Arduino UNO comes with 2 kB of RAM installed and retails for about $20.

This is not an accurate calculation. First, the main cost of an Arduino is its PCB and the supporting electronics, plus a profit margin to cover the manufacturer's overhead.

Also, the RAM is not "installed": the 2 KiB of RAM is etched into the same silicon die as the ATmega328P microcontroller. A 328P only costs $2 each, and close to $1.50 in large quantities [0]. It's pretty much a standalone device: it needs nothing but power, some wiring, and two decoupling capacitors to operate as a minimal system (not even an external crystal), and yes, even this configuration can run standard Arduino code. Let's use a conservative number, $3; then it would "only" cost you about 2.4 billion dollars.

Furthermore, the ATmega328P is marketed as a "mid-range" 8-bit microcontroller from Atmel, mainly for its GPIOs. If you use an ATtiny1614, which is basically the same hardware with far fewer GPIO pins, it would only cost $0.70 each; let's use $1 as a conservative number. Then it would only cost you 0.8 billion dollars for 1.5 TiB of RAM, which should already be considered very impressive, since in the late 1970s you couldn't even get a NAND gate, or even a MOSFET, for less than a buck. Adjusted for inflation, today's price would be as low as ~$0.17 per 1024 bytes.

And then you consider the fact that it's not only a RAM, but in fact a computer... Isn't it a miracle of Moore's Law?

[0] https://www.mouser.com/ProductDetail/Microchip-Technology-At...
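For what it's worth, the three per-unit prices tossed around above can be compared in a quick sketch (the unit prices are the thread's rough figures, not authoritative):

```python
# Cost of 1.5 TiB of RAM assembled from 2 KiB microcontrollers, using the
# per-unit prices from the thread (all rough assumptions).
total_bytes = 1.5 * 2**40          # 1.5 TiB
sram_per_chip = 2 * 1024           # 2 KiB of on-die SRAM per chip

chips = total_bytes / sram_per_chip            # ~805 million chips
for part, unit_price in [("Arduino UNO", 20), ("ATmega328P", 3), ("ATtiny1614", 1)]:
    print(f"{part}: ~${chips * unit_price / 1e9:.1f} billion")
```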

The memory sticks we're talking about here also come on a PCB, so we should not exclude that cost. What use would loose chips be if they are not soldered onto a PCB? You could argue that there could be a supplier that produces working memory sticks for a cheaper price. I'm not contending that. These products, however, are unfamiliar to most people and Arduino UNO is a brand name that many people recognize. There are many cheap knockoffs of that product that may or may not be compatible. I would prefer to stay with respected manufacturers in this case.

It is true that those microcontrollers are bundled with the memory; in other words, the memory also has computing power installed, a capability that some modern memory chips also include. You may argue that we should compare with those memory chips. However, in the case of microcontrollers we cannot separate the memory from the computing unit, and we need the computing unit for communication. Is the comparison thus unfair? Maybe.

It's a joke.

I thought the topic was unreasonable comparisons. I was wrong; it was nostalgia.

I should have compared with the magnetic-core memory used in the Apollo program as an example. My gut tells me that we don't have enough rocket fuel to launch that kind of memory into space.

Fun fact: it gets even more expensive (per GB) when you want to add RAM beyond 1.5TB, because Intel cranks up the CPU price just for higher RAM support. Off the top of my head, the same CPU accepting 4TB was twice the price of the 1.5TB version, going for around $18,000.

AMD's new EPYC supports 4TB without the artificial price increase; IMO, some healthy competition for the server market.

Yeah, people who need large amounts of RAM for compute will be buying lots of AMD.

I'd like to see how the flash-as-RAM devices work out for some of these apps, i.e. is it purely single-threaded random access or can it be batched/cached?

Is it just me, or is it becoming too commonplace for an article to have a video attached to it that is vaguely related? The video attached to this article is how to upgrade the mac mini to 32GB of ram. Interesting, but just left of adjacent in terms of how it's related to this article.

I have an even broader question: why do articles about some subject have a random image representing that subject attached? For example, if an article talks about coffee, there will be a picture of a random cup of coffee or a random coffee field. I have never understood the point of it, but this has always existed in all forms of journalism.

For the print edition, the image is there to catch the eye of people that are interested in the general topic as they scan through the publication to determine what they want to read. I suspect it’s mostly vestigial for the one-article-per-webpage digital edition, but it still helps convey some branding in the particular choice and style of image.

I've worked with a CMS where each entry pretty much required a "hero" image. Elsewhere in the system, articles were thumbnailed as photo+headline, and attaching a random coffee.jpg caused fewer problems than trying to fix templates that expected photo+headline.

I asked myself the same thing after starting to watch the video, because I couldn't believe the article was really that short.

Watched the video genuinely interested, thinking he was somehow going to put 1.5 TB of RAM into a Mac mini... until he said 32GB at 3:11.

I want to know why ECC RAM isn't more common. With the amount of RAM going into today's computers, it's insane not to protect it.

I only use it.

But "gamers" who like to overclock, and AMD users who use motherboards that don't generally support ECC don't care.

For professional work, I think you'd be crazy not to. You get roughly 1 bit flip per year per gigabyte, so the 128GB systems we use as desktops and build servers would see about 2 bit flips a week. Do you really want to ship a build to a customer that has a potential bit flip in it?
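That rule of thumb works out roughly like this (the 1 flip/GB/year rate is the commenter's estimate, not a measured figure):

```python
# Expected bit-flip rate for a 128 GB machine, using the thread's
# rule of thumb of ~1 bit flip per GB per year.
flips_per_gb_year = 1    # assumption from the comment above
ram_gb = 128

flips_per_year = ram_gb * flips_per_gb_year   # 128 flips/year
flips_per_week = flips_per_year / 52          # ~2.5 per week
print(f"~{flips_per_week:.1f} bit flips per week")
```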

It’s a market segmentation issue for Intel, which AMD has started undermining but has not yet frontally attacked - perhaps they still have plans to (ab)use it themselves.

ECC is exactly why my last system build was a Ryzen.

It costs something in terms of timing, power, complexity, and space overhead; software errors are by far the lower-hanging fruit for the overwhelming majority of systems.

tl;dr: 1.5TB costs $18,000.

That's only about three times more expensive than what you'd get by extrapolating from regular-sized DIMM modules.
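A rough sanity check on that ratio, assuming a hypothetical ~$4/GB for mainstream-sized DIMMs at the time (that per-GB figure is my assumption, not from the article):

```python
# Extrapolating 1.5 TB from commodity DIMM pricing versus the quoted $18,000.
commodity_usd_per_gb = 4.0       # assumed price for mainstream 16-32 GB modules
target_gb = 1.5 * 1024

extrapolated = commodity_usd_per_gb * target_gb   # ~$6,100
premium = 18_000 / extrapolated                   # ~2.9x
print(f"extrapolated ${extrapolated:,.0f}, actual is {premium:.1f}x that")
```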

While each node of a server farm of these can be relatively cheap by enterprise budgets, an entire cluster of these machines is not (at $200k+ that’s the budget for an FTE, after all). Which is why I’d want to check if an Optane drive could be a strong consideration to shave some costs off for a reasonable trade-off in performance. Not sure if 10+ TB of Optane is viable enough to be at least an order of magnitude larger than the RAM capacity.

Also, 1.5 TB RAM became somewhat commonplace around 2014-ish. LinkedIn had to have Dell custom build them a 1 TB RAM machine around 2009. So 10 years later for the same build to be passé is fully reasonable when a lot of the market is doing massive or very latency sensitive graph calculations that won’t accept the variances of distributed algorithms.

Last I checked Optane still cost more than RAM up to about 2 TB/server (dimms or drives). It does scale significantly higher in drive form though.

Optane is at $5k for a 1.5 TB drive while the OP quotes $18k for 1.5 TB of RAM. https://www.intel.com/content/www/us/en/products/memory-stor...

Being able to reach 12 TB+ with something like Optane is much more likely than with RDIMMs though, and memory latencies would probably get insane enough that Optane would start looking viable anyway.

"It will cost you"

Well, less than any other time in history. Not sure I get the point here.

This is a use case where persistent memory really shines. It's about 10x slower, but you can get 1.5TB of it for less than $10,000. You don't have to use the persistence; you can treat it as RAM.

With VROC or the Threadripper-equivalent NVMe CPU RAID, you can get 7Gbps read speed from a RAID 5. That's DDR2 speed.

Unless you truly need DDR4 speeds, I would think you'd be better off for the money using CPU RAID and NVMe. More drives, more speed. One guy used 8 drives and got 28Gbps.


The throughput is amazing. The screenshot shows 28,375 MB/s; the capital B typically indicates bytes, whereas a lowercase b indicates bits, so your Gb/s translation is a little confusing. The latency is still much higher than main memory or Optane, but that might be fine for many applications.
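The factor-of-eight difference is easy to check:

```python
# The screenshot's 28,375 MB/s converted to bits per second, showing why
# quoting it as ~28 Gbps understates the throughput by a factor of eight.
mb_per_s = 28_375                   # megabytes per second, from the screenshot
gbit_per_s = mb_per_s * 8 / 1000    # ~227 Gbit/s

print(f"{mb_per_s} MB/s = {gbit_per_s:.0f} Gbit/s")
```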

You're right, I didn't look closely at the buttons; I just assumed bits based on the all-caps used all over the interface.

Linear throughput is the wrong way to think about this. Random access is much more important for DRAM workloads. Persistent memory has a block size of 4 cache lines (256 bytes) vs 4KB for an SSD. This drastically cuts down on read/write amplification.
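The amplification figures implied by those block sizes, for a single 64-byte cache-line update:

```python
# Write amplification when updating one 64-byte cache line, given the
# block sizes mentioned above (256 B for persistent memory, 4 KB for an SSD).
cache_line = 64
pmem_block = 4 * cache_line    # 256 bytes: 4 cache lines
ssd_block = 4096               # typical 4 KB SSD page

print(f"PMem amplification: {pmem_block // cache_line}x")
print(f"SSD amplification:  {ssd_block // cache_line}x")
```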

Generally burst access and not random is important. A few MB at a time.

In case you don't have $18,000 for those fancy brand-new DDR4 modules: second-hand 64GB DDR3 ECC registered RAM is about $100 each, and a dual-socket C602 motherboard such as my Z9PE-D16 has 16 memory slots, so 16 x 64GB = 1TB of RAM in such a workstation costs you $1,600.


Does the Z9PE-D16 actually support 64GB DIMMs, though? ASUS claims it only supports 32GB modules for a total of 512GB, but that could easily be because they didn't update their marketing material rather than a technical limitation.

My understanding is that the memory controllers are in the processors; the motherboard just provides the slots, so it shouldn't be a limiting factor stopping you from using 64GB DIMMs.

Best place to pick up second-hand?

ebay.com, taobao.com

1.5TB of RAM is grand and all, but the high-end Dell R9xx servers take a full 6 or 12TB at full load... Fun note: the Xeon SP M models support the full RAM...

Supermicro also has boards for up to 12TB DDR4... Even workstation boards that support 4TB.

I just want to know why anyone would personally need 1.5TB of RAM in their PC today. I'm not saying that 128GB is all we need; I'm sure in the future we might all have 32TB in our portable devices. But what sort of workload or app could you possibly need 1.5TB of RAM for?

On my personal desktop right now I have 20GB of RAM. With over 250 tabs open in Firefox, plus Spotify, VS Code, PostgreSQL, pgAdmin, multiple Evince and FBReader windows, RabbitMQ, some Node.js servers, 2 Linux containers, 1 Docker container, a MySQL server, 10+ xterm sessions, and a few screen sessions, I'm using 14GB.

I would love it for development (I work from home on personal hardware): multi-socket NUMA machines behave differently enough that being able to continually see how in-development software versions behave on one would be valuable. And I just find it less productive / more painful to do detailed performance work remotely. Latency is one part, the added time from having to sync up code and build another, but there's more that I can't really pinpoint.

> I just want to know why anyone will personally need 1.5TB of ram in their personal PC today.

Then you could get rid of all disk reads during normal operation. I imagine gamers would absolutely love it, particularly if creating and populating a RAM disk were made really simple.

Seems the price hasn't come down much since 2015 (https://news.ycombinator.com/item?id=9582497)

In my experience with servers, having all DIMM slots populated (usually anything over two-thirds) clocks down the memory frequency. Not something you want for maximum performance.

It's unclear what this article is about. Yes, large modules are expensive, but 128GB DIMMs did not exist in the DDR3 era; you're paying the early-adopter tax. If you want a huge-RAM system for cheap, there are a lot of used 4-CPU servers; the whole system can be assembled for under $5k.

If you're going to flip out over the cost of RAM at the high end, then you don't want to hear about all the expensive proprietary stuff that clearly costs only pennies to make. I'm talking about the special torque wrenches needed to remove motherboards and other chassis parts.

It's like Apple's new Mac Pro and display. Those prices are dialed that way just to keep them out of home offices. It's infuriating, yes, but prevalent.

I decided to put 32 GB of Ram in my home desktop some months back. I knew I didn’t need that much, but I figured macOS (it’s a Hackintosh) would use it for file caching or some such.

To my disappointment, it goes completely unused more often than not. According to Activity Monitor, I currently have around 7GB just sitting idle.

Does Activity Monitor report the page cache/UBC size? Most of the time, basically all of the memory I'm not using directly is filled with page cache. The other big thing large memory typically does for me is delay GC in the JVM.

Typically the only time I use enough memory that my working files don't fit in the page cache along with the application memory, is when I build Android.

It's possible you just don't have that much on your disk that you use.

As of right now, 5.2 GB of RAM are being used for "Cached Files" according to Activity Monitor (still 25/32 GB in use overall).

In my experience, Windows utilizes RAM much better. It will cache programs and data at times it anticipates you'll use them; for example, it caches Outlook at 9am because that's when I check my email.

At all times, only a very small fraction of my RAM is free.

I had a PC with 32MB back in 1997, when 16MB was the norm. Felt pretty nice. Now, not so much.

I've not owned one of those Mac minis. I was a bit surprised how it was put together from a servicing perspective: way too much work to do something as simple as replacing the RAM, which I guess was the intent. I suppose I should be more surprised it wasn't soldered to the mainboard.

Hardware is dirt cheap today; 256GB of RAM fits into a normal high-end workstation.

The 2011 Mac Pro sitting under my desk has 128GB of RAM.

Can't get enough to keep Android Studio happy.
