DDR5 RAM prices crashed by 20% in May (tech4gamers.com)
293 points by ddtaylor on June 4, 2022 | 172 comments


For comparison, here's an Anandtech article from 2015 talking about falling DDR4 prices at roughly the same point in the adoption cycle: https://www.anandtech.com/show/9864/price-of-ddr4-memory-dro...


Glad to see DDR5 heading for more mainstream pricing. When I built a new machine to replace my well aged 4th gen, I was pleasantly surprised the 12th gen Intels supported both, so I’ve used nice high capacity DDR4 while prices come down, and in a year or two I’ll look at upgrading the motherboard and RAM to DDR5 and throw the older stuff into the NAS.


The cost of a new mobo and RAM can't be worth it for the minimal improvements, surely?


It's very much workload dependent: https://www.anandtech.com/bench/product/2894?vs=2892


Depends what you do with it. If you run VM servers, it might be well worth it.


Personally I'm looking for 64 GB for a work laptop, lots of VMs yes

So frustrating that max installable RAM usually isn't listed in the specs


That's because maximum capacity is determined by the CPU and the number of slots available.

Just like a phone I bought years ago that only listed a maximum micro-SD capacity of 32 GiB even though the right formatting would allow 64 or 128, albeit 'more wasteful' as the address space didn't increase.


11th-gen+ Intel all support 64 GB+ of RAM. Whether the laptop mobo supports 32 GB DIMMs is another question altogether.


Yep, used for development work so many VMs and such.


Bleeding-edge tech usually starts off very expensive and tends to get cheaper as manufacturing capacity increases; hardly newsworthy.


Even if this was generally predictable, the exact timing may not be.

I was glad for the information.


Quality control and payback periods are two situations where this becomes very obvious if you think about it, but it is not obvious to most consumers.

When you get the factory working well enough that you make a gross profit, you can scale up. Then you reduce waste over time by making little tweaks, and your margins go up a little. But then you need to pay off the equipment and the research costs, and nobody wants to taper that off in a long tail, because what's the point in making a product you make no money on the entire time, or leaving yourself open to the competition introducing a new product you can't compete with because you don't have money saved up for R&D?

So you pay off your loans and refill your war chest, and everything after that is gravy. If you're the first in your industry to get there, you can turn the screws on the competition by lowering your prices first. Or don't, and get sued for price fixing. Dropping the price by 3% isn't going to get anyone to take a chance on your product instead of someone else's. You've got to do something more dramatic than that. And then you might find out you're not the first, because everyone else happily matches or exceeds your price drop.


...and then, at least with RAM in my experience, after that it gets expensive again when no new computers have used that particular RAM for a few years.


It's newsworthy if you were on the fence waiting for a decent price drop.


Isn't Intel Optane bleeding edge?


DDR5 makes no sense today at these prices. AMD is going to be in a tough position in the fall with DDR5-only support in their new chipsets/CPUs.

My next rig may be a 13th-gen Intel if DDR5 prices don't come down to DDR4 levels (given that DDR5 isn't any faster than DDR4).


> DDR5 makes no sense today at these prices. AMD is going to be in a tough position in the fall with DDR5-only support in their new chipsets/CPUs.

Today is not the fall. Don't buy memory today for a processor that is not yet available. 'Next Generation' memory prices tend to go through this cycle, and it seems like AM5 will be out right around when the pricing gets pretty sane. If it released in Q1 2023 with DDR4 support, it would be a waste of die space. AMD has tended to keep sockets alive longer than Intel, so a transitional memory choice would be a larger burden.


That may be true in the short term, but comparable Intel processors actually cost a lot more because the motherboards are more expensive and power consumption over time will make the less power efficient Intel CPU much more expensive than AMD.


In terms of power consumption, I think that will heavily depend on usage patterns. AMD desktop processors currently have really bad idle power because of the chiplet-based design with an outdated 14nm IO die. So if a machine spends most of its time idle or mostly-idle, its long-term energy usage (and thus operating cost) could easily be higher with AMD than with an Intel processor that has excessive peak power draw but better idle power management.

(Are there any business-oriented AMD desktop PCs with Energy Star or similar certifications, or are those regulations too loose to matter here?)


Zen 4 will have a 6nm I/O die from TSMC, power usage ought to be lower.


That new IO die is by far the most important and interesting thing about their upcoming generation. Basic graphics capability as a universal feature, hopefully better idle power, DDR5 and PCIe gen5, while the compute chiplets will seemingly be getting fairly modest performance improvements.


A process shrink for the compute cores is always nice, and so is 15% more single-core performance.


> AMD desktop processors currently have really bad idle power because of the chiplet-based design with an outdated 14nm IO die

Unless you use one of their fairly popular APUs, I assume? I got something like 15 W at idle at the plug for an OEM AMD box. Is that a lot?


I was ignoring their desktop APUs, yes, because AMD seems to do quite a lot of that themselves. As repackaged laptop SoCs, they're decent at idle power, and they're fine if you only need mid-range performance and last year's features out of your desktop.


Of note: pairing the desktop APU with one of the higher-end desktop chipsets still leaves the chipset (which is just another legacy IO die) with the excessive 10-15 W idle consumption it would have anyway.

I'm not sure if the Bx50 chipsets are any more efficient or if that needs the absolute low end of the A-series chipsets (which I believe are instead some ASMedia-designed IO stuff).


> I got something like 15 W at idle at the plug for an OEM AMD box. Is that a lot?

What kind of systems are those? I’ll need something like this soon.


I got a ThinkCentre M75t Gen 2 with a 4750G. Note that this was out of the box; when I added two more memory modules to get 48 GB of RAM, it rose to ~20 W on idle, but I kind of expected that.


I have a fairly high-end gaming PC, and I hooked it up to measure the power used over 4 weeks. I found it costs me about $12/month, which, given that my electricity bill has much larger administration and other fees, is minimal. I don't think people who don't run a datacenter care that much about the difference in power draw between an AMD and an Intel CPU.
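For a rough sense of scale (assuming roughly $0.15/kWh as an illustrative rate, not an actual bill):

    $12/month at $0.15/kWh  ≈ 80 kWh/month
    80 kWh / ~730 h         ≈ 110 W average draw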


On the contrary. We keep a keen eye on power draw for our colo racks because our data center provider charges a lot for any power overages. $12/mo sounds great at home. However, having 20+ servers/blades/network gear drawing 2-3 amps each quickly approaches the 80% limit of our 2x 30A circuits. Regardless of CPU architecture, power draw is a big deal.
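Back-of-the-envelope with those numbers (taking 2.5 A as the midpoint of the 2-3 A range):

    20 boxes x 2.5 A  ≈ 50 A
    80% of 2 x 30 A   = 48 A

So even the midpoint case is already at the provisioning limit; a handful of hungrier CPUs blows right past it.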


You’re literally reinforcing his point.


yeah. it is very entertaining/annoying to see carbon zealots bark at high-end PCs, as if they run under 100% CPU+GPU load 24x7


Maybe the zealots actually bring up this point as well... To me a high end computer looks wasteful because it was recently built and caused another computer to hit the landfill (usually).

It would be more efficient to replace a computer on a 5 or 10 year cycle rather than a 1 or 2 year cycle, but then you'd have a high end computer for a much lower percentage of the time.


they might, and like always, they'd be wrong. high end hardware doesn't go to landfills until it's dead

for example, 4790K + GTX 980 is an 8 year old computer. a high end PC from 6 years ago has a GTX 1080 in it, look up how much that still costs

you'd be crazy to throw something like that away instead of selling it for about $400 or giving it away to friends or family

>It would be more efficient to replace a computer on a 5 or 10 year cycle

if you only use it to check emails and procrastinate on the internet, then yeah. but people who buy $2000+ worth of hardware usually do something more productive or entertaining than that. and like I said, when they do buy it, whatever they had before is usually sold off to recoup part of the new hardware cost. that's how it worked with hardware for as long as I can remember

you know what does go into landfills? disposable garbage phones, designed to be resistant to repair and maintenance, and made unbearably slow by forced software updates when it's time for you to consume the new product. and I find it very ironic that these zealots we're talking about have a great affinity for the products from that particular company


I bet the cooling in a data center costs way more than the electricity used in the chips.


It depends.

Look up PUE for data centers. Because I work for Google, I recall, for example, that we build very efficient data centers:

https://www.google.com/about/datacenters/efficiency/

I.e. just 10% overhead.


Wouldn't efficiency affect the cost of cooling, given that the excess energy becomes heat?


Intel processors are currently very efficient on the workloads they're actually designed for.

The mid level 12th gen chips absolutely fly in gaming workloads, it's only if you max all cores for hours on end that the "Sell your grandmother for a MHz" approach Intel currently has makes a difference.


Intel Z690 motherboard pricing is awful. I paid $220 for a boring basic board. I suspect that X670 boards will have higher prices than X570, thanks to DDR5 and PCIe Gen5.


It's the other way around. The reports are that the two X670 chipsets cost less than the X570 chipset. The motherboards might be more expensive due to additional capabilities and having to support DDR5 and PCIe 5, but a basic AMD Zen 4 motherboard with a single X670 chipset and limited add-on ports and controllers might cost less than a basic X570 motherboard.


For Z590 vs Z690, the chipset price difference is just $1 according to Intel ARK. I think the price increase is mainly due to other components.

I found a good article: https://www.techpowerup.com/289728/intel-z690-motherboard-co...


Yeah it seems crazy to buy these. What is the benefit? I sometimes feel FOMO for the new Apple M1 Pro/Max/Ultra whatever and then realize "my MBP with an i9 and 16 GB of RAM literally has 6 hours of battery life" and then realize that if I'm ever sitting down for that long I should get up and touch some grass.


until you try an M1 product, you can't appreciate just how good it is - and how bad the previous i9 is.

Also, in 2022, six hours of battery life is not something I could live with. While you're right that you don't need to sit for 6 hours in a single sitting, it could easily be that you don't have an option to charge.

transatlantic flights, for example. (ok, no grass there :-))


My wife has an Intel MacBook, circa 2018? It gets uncomfortably hot just watching YouTube videos; my M1 MacBook Air has never even gotten warm. Her battery life seems terrible, maybe 4-5 hours or so, but I go days without charging mine with casual use. If you are not diligent about recharging every night, the drastic increase in battery life makes a big difference.


6 hours of battery life would be a dream. With my workflow the Intel MBPs gave me around 2 hours, the M1 Pro doubles that to 4 hours which is perfect because by then I'm getting up, going for lunch, etc.

With the Intel MBP I had to carry around a huge jackery-style portable battery


I have a M1 and am frustrated by how consistently bad it is. All these people heap praise on it, while mine runs so slowly that it needs to be restarted at least once daily.

I took it into the mac store, they replaced the whole board/cpu unit. No change.

So either I have some other fluke, my software stack has something strange I can't debug (activity monitor only shows apple processes using all the CPU), or people just have different opinions of what fast means.

For context, I would say my 2018 Chromebook is generally faster for most things than my M1.

Rant over. I assume I have some dud computer it's just frustrating.


>activity monitor only shows apple processes using all the CPU

Would that be an 8 GB M1 and the swap process, by any chance?


Maybe it's airline dependent, but I can't remember the last time I went on a long-haul flight that didn't have charging options. Some airlines are even starting to add USB-C to every seat so you don't need to futz around with an AC adapter (although given the proliferation of different charging capabilities, it might still be handy to use AC if you want to charge quickly)


Most long flights have AC outlets but half the time they are located under the leg of the person next to you. Virtually all the time they are a universal outlet that barely holds a US plug and definitely doesn’t hold a MacBook charger. It’s not a game changer but it’s nice to leave your adapter in your bag and not worry about charging.

Speaking of chargers, a 45W tiny GaN charger is enough for an M1 MacBook (maybe not for all use cases, but I've yet to have an issue). I use this one, which isn't much bigger than a USB-A iPhone charger. https://us.anker.com/products/a2664


The extension cable [1] fixes both of these problems.

[1]: https://www.apple.com/shop/product/MK122LL/A/power-adapter-e...


Yes, but it takes up the same volume as a MacBook. The UK plug in this kit (or most any knock-off or adapter) also solves the problem. I still prefer good battery life and carrying nothing extra, however.

https://www.apple.com/uk/shop/product/MD837ZM/A/apple-world-...


The whole transatlantic thing is BS; do you really want to be writing code all the way? Anyway, most flights going east over the Atlantic are overnight, and you want to be sleeping.


On the last flight I was on, the AC service couldn't handle the power draw of my charger. Kept tripping the circuit immediately.


One nice benefit of laptops adopting USB-C charging is that they become much more tolerant of slower chargers, which in turn shouldn't trip a plane.


What airlines have you been on that have USB-C, and at a high enough wattage to charge a laptop?


Air Canada's chargers are 60w IIRC. But MBPs are pretty accepting of various voltages- if you're desperate, you can even trickle charge one with an old school USB-A port


Air France (A220) within Europe, 60W USB-C port


I use an i10 laptop with 16GB of RAM and I also use a 2012 macbook pro. Both running Linux. I am productive on both devices, so I would be very surprised if an M1 product would make me feel the need to switch away from an i10 (or i9) device. I do agree the M1 is quite neat though, just not a big deal for users that run tasks lighter than compiling a browser engine.


I didn't even realize there were i10s


There aren't. Maybe they meant 10th gen?


What is an i10?


What is an i10?


I bought an intel iMac several months ago when they suddenly went missing from the Apple Store with no 27” replacement. It has 8GB of RAM, yet is somehow faster than my M1 Max (16GB) at compiling my current Xcode project (a Unity game). The battery life is good, but if you’re pushing it hard it isn’t that great unfortunately. For web dev, sure it can go most of the day. Also it’s extremely heavy and thick. I wish it was thinner, it doesn’t fit in the bags I have for my older 15” MBPs. It has an HDMI port, which I will never ever use, one less USB C port (which really sucks ass), and with all the dumb thickness they didn’t include the one “retro” port I would actually use daily, USB type A. It doesn’t get very hot at full load (CPU mining for days/weeks) so I’m not sure why it’s so thick. The old Intel versions should have been in the mega-thick chassis, but the M1 doesn’t really benefit.

My favorite part of the new 16” MBP is actually the screen. It’s so incredible I struggle to describe it. When moving the mouse it feels more instantaneous, everything is smoother. So don’t FOMO on the CPU, it’s good but not the killer feature.


Yeah the screens and speakers are great; a real step-up from my previous 15”. The HDR actually works unlike most standalone screens that have good specs on paper but actually don’t deliver. The webcam is much nicer (although nothing special, just that the old ones were obsolete). The mic is also good, as it always was.


Yeah the 16" (last gen Intel cpus) is the one I have! That's what I'm saying, it's fine and while the battery life of the M1 seems absolute stellar, I'm def not going to be buying computers for the next 3 or 4 years just because I have like 3 unused MBPs that all are literally in great condition. Too much churn in hardware tech.

EDIT: your music link sharing site is neat, be my friend


> It doesn’t get very hot at full load (CPU mining for days/weeks) so I’m not sure why it’s so thick.

Perhaps that’s why…?


No way to prove it without some serious hardware mods! But given how the M1 Air performs without a fan or thick chassis, I have my own guess…


I noticed that I hadn't charged my Macbook Pro for almost a week once. I wasn't using it much per day but it was being used.

Also I don't really care about the battery, the amazing thing about the M1 chips is that they just don't get hot.


I too don't care one iota about the battery life, it's plugged in at all times anyway. But it's blazing fast and the fan never makes a sound, assuming there even is a fan. It really was a game changer for me in mobile development where I was used to fans continuously whining for 8h, and the machine still being slow probably due to thermal throttling.


This doesn’t get mentioned enough. My 2020 intel MacBook Pro is a solid machine that gets the job done.

But the damn thing burns my lap.


Why? People can still buy the 4xxx and 5xxx ones. Anybody who is upgrading to the latest and greatest can afford the extra 50-100 bucks for ddr5... who cares.


$180 for 32 GB of DDR5 is cheaper than what the same DDR4 kits have sold for in recent years.


I got 32 gigs of DDR4 2 months ago for $120.


And I paid 110 euros 2 years ago for 16 GB of plain Corsair DDR4-3200 (CL18 IIRC), and they were among the cheapest; 32 GB was around 200 euros.

My point being that RAM prices had been high for years before they started readjusting in the last 12 months, and that DDR5 currently isn't that expensive.


For a good speed, or cheap and slow sticks?


3200 MHz CL16. They aren't the absolute fastest, but I have a Zen 2 CPU, so faster wouldn't have made a difference.


I'm sort of surprised Intel is still an option. Since they never revamped the fundamentals of their hyperthreading technology, their Spectre/Meltdown/etc. mitigations still send every new chip they make to the barber for a 15-30% haircut in performance. The last HN metrics I saw focused exclusively on comparing their performance against... another Intel chip.

Just because players like F5 chose to ignore the patches, and their Linux kernel patch shipped disabled by default, doesn't mean they're out of the woods.


> Since they never revamped the fundamentals of their hyperthreading technology, their Spectre/Meltdown/etc. mitigations still send every new chip they make to the barber for a 15-30% haircut in performance.

Most of this is blatantly egregiously false.


Literally none of the new Intel processors are vulnerable to Meltdown.


Basically all out-of-order superscalar processors are vulnerable to Spectre. Intel has already mitigated Meltdown in hardware; remember that it was disclosed 4 years ago now...


Is AMD _not_ paying the same price in performance because of spectre/meltdown?


AMD is paying a smaller price; they still had some mitigations to do, but they didn't let speculative memory accesses bypass access control, IIRC, so there's less speculation to turn off.


> AMD is paying a smaller price; they still had some mitigations to do, but they didn't let speculative memory accesses bypass access control, IIRC, so there's less speculation to turn off.

And this is also blatantly false. Zen 3 CPUs have a bigger overhead mitigating Spectre than Zen 2 and modern Intel uArchs.

Check Phoronix for performance reviews.


AMD isn't vulnerable to Meltdown, but they do have their own Meltdown-style vulnerability (actually discovered by the same team) that remains unpatched at a CPU level.

https://www.usenix.org/system/files/sec22summer_lipp.pdf

The first two attack primitives have been patched, sorta, by making those measurements require root access (and making them inaccessible to VM guests). So they're not fixed, but in a properly configured system you won't be able to get at the primitives needed for the attack.

The third measurement, however, remains unpatched. It can be mitigated by turning on KPTI, but AMD doesn't change the default because of the performance consequences. KPTI works by constantly flushing the TLB cache every time you hop to kernel-level code (ie every time you make a syscall), thus creating a "cache barrier" between the two privilege levels - the attack then cannot leak something that is on the higher privilege level, because it's never in cache. This is the same way Intel mitigated meltdown, and it has a large performance impact - every time you talk to the network, or disk, or anything else, you have to flush your TLB cache.
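If you're curious what your own kernel is doing here, Linux exposes the chosen mitigation through sysfs; a minimal sketch in C (assumes Linux; booting with pti=on is how you'd force KPTI where the kernel's default leaves it off):

    /* Print the kernel's reported Meltdown mitigation status, e.g.
       "Mitigation: PTI", "Not affected", or "Vulnerable". On AMD the
       default report is typically "Not affected", which is exactly the
       default being discussed above. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");
        char buf[128];
        if (!f) { perror("open"); return 1; }
        if (fgets(buf, sizeof buf, f))
            printf("meltdown: %s", buf);
        fclose(f);
        return 0;
    }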

AMD's official advice is "this doesn't work if you enforce address-space boundaries, follow all security best-practices", the subtext being you need to enable KPTI, but they won't recommend the option be set by default, and they've done their level best to shove it under the rug. Which is understandable, for some reason it's entered the public consciousness that "AMD doesn't have vulnerabilities" and it would be very bad for their image if that notion were dispelled.

https://www.amd.com/en/corporate/product-security/bulletin/a...

Briefly discussed and measured in Phoronix here: https://www.phoronix.com/scan.php?page=article&item=if-amd-k...

So yeah, AMD doesn't pay the same price for mitigating side-channel attacks... because they don't mitigate them.

As phoronix discusses, this gets back to the question of whether everyone needs to be mitigating these, we're not all cloud hyperscalers and there's never been a consumer attack using smeltdown-style attacks (it's far easier to just drop some malware and steal the password vault rather than to randomly scan through memory looking for it - you don't have to be the fastest animal in the herd... just not the slowest) but AMD has kinda chosen insecure-by-default here. Sure, that's faster, and you can turn off mitigations on Intel too. Should you?

The odds of an exploit are slight, but that didn't prevent people from taking heavy precautions when this category of exploits was (re-)discovered.


Doesn't DDR5 have a much higher clock ceiling? That should surely translate to much higher throughput?!


Yea, but those kits are prohibitively expensive if I understand correctly.


Very few workloads are memory-bandwidth bound. For workloads that are, yes. You'll get bigger gains by using all the memory channels and engaging all the memory controllers vs upping the clock.


Yes, but throughput is rarely a bottleneck for consumer-grade CPUs. Maybe on the 16-core Ryzens you could get some benefit for select workloads.


Does any brand at all sell ECC UDIMM DDR5 modules? I want to use it for an Alder Lake workstation, but can’t seem to find any except for Dell OEM memory. Is there none on the “open” market?


I have seen some somewhere in an online shop, but the price was double that of ECC UDIMM DDR4.

It is better to either wait for the price of ECC DDR5 UDIMM to fall or to buy now a workstation motherboard like the Gigabyte MW34-SP0, which supports the cheaper ECC DDR4 UDIMM.

For a workstation, it is usually better to have at least 64 GB of DRAM than to have less memory, even if it is faster. For example, compiling a large software project in a RAM disk may save much more time than having DDR5 instead of DDR4 would, so the price difference should be less than 50%, preferably much less, for DDR5 to become attractive.


I was under the impression that all DDR5 modules have ECC in them, but only "ECC" modules reported errors to the OS. Wouldn't normal DDR5 be sufficient for your workstation?


So what happened is the following:

When JEDEC was standardizing DDR5, they mandated on-die ECC, which checks for errors that happen while the data rests in RAM. This was done to increase reliability because of the high memory density. This is good, and every DDR5 module has it.

However, there's another ECC: the one where errors are checked each time data is transferred between the CPU and RAM. This requires support from the CPU, the RAM, and the mainboard. This is important for data integrity, and every computer that runs for multi-day periods must have it. I’ve found by reading papers about it that 1 bit error per 4 GB RAM per 3 days is guaranteed to happen.

If RAM is marked DDR5 it has on-die ECC. If RAM is marked DDR5 ECC it has on-die and transfer ECC. That is what I want and it is not optional at all in my opinion.


> I’ve found by reading papers about it that 1 bit error per 4 GB RAM per 3 days is guaranteed to happen.

This estimate is several orders of magnitude too high. We’d be seeing weird bit flips and file corruption all over the place if this was even close to being true.

I have ECC systems that will report the number of errors detected and corrected. Still waiting to see any errors on my main machine with 64GB of RAM.

At server farm scale, the majority of memory errors come from a very small number of faulty memory sticks. It’s not an even distribution of errors across all memory.


>I’ve found by reading papers about it that 1 bit error per 4 GB RAM per 3

So for my 64 GB of RAM I get 2 bytes' worth of errors a day. And a few hundred over the 2 or 3 months since I last restarted my PC.


This could easily be true, and easily have zero impact on anything you are doing.

Memory can be assigned to all sorts of things so the impact of a single bit change will depend entirely on what that memory is being used for at that moment.

For example, I load a program into memory, but I'm not running all of the code in that program all the time. Let's say the program includes code for exporting a list to excel, but that's a feature I never use.

Or it has a procedure which uses a local variable as a loop counter. Outside the confines of that loop the variable is meaningless.

Plus it's unlikely you are even nominally using all 64 GB at the same time. The errors might all be happening in unused RAM.

Even if it flips actual data, it may end up being no more than a typo in a document.

So yes, a single bit in 4gb might flip from time to time. But the probability of the flip being "meaningful" is less than 1.


IMHO a typo in a document is a serious error that we should put substantial effort into preventing.

There are even worse potential outcomes:

* A single bit error in a stream of compressed or encrypted data stream that renders the stream unreadable.

* A permissions check being yes instead of no.

* Backups being silently corrupted


While a typo can be significant, statistically speaking it almost certainly isn't. Equally I'm not saying that all bit flips are inconsequential. I'm saying that most bit flips likely go either completely undetected, or have meaningless consequences.

Saying that, say, 1 bit per 4 GB will flip every day is a long way from saying that a _significant_ (or even detectable) bit flip will happen every day (per 4 GB). A simple feel for "what memory is used for", coupled with "how much of the RAM is actually used", makes me suspect that there's a very low probability of it being an issue.

Which makes the parent comment, about 1600 bits of data being flipped since the machine last rebooted [1], both "true" and likely "meaningless".

[1] I'm not sure what rebooting has to do with anything though.


I get what you're saying but on the other hand isn't bit-flipping that results in typos or crashes something that is so beneath 2022?

Like isn't this the kind of thing that Turing and Zuse had to deal with?

And if it isn't, when is it a problem that we will finally and absolutely excise?

Is it more of a 2030 thing? 2040?


With more and more miniaturization, such stochastic behaviour becomes more and more prevalent. Same with higher- and higher-speed interconnects. Forward error correction is standard on e.g. NAND flash and VRAM for GPUs, and is being adopted by faster Ethernet standards. It's mostly just the consumer DRAM market that doesn't have robust error correction.


Or it can have catastrophic effects.


> I’ve found by reading papers about it that 1 bit error per 4 GB RAM per 3 days is guaranteed to happen.

That is very, very dubious indeed. I can't believe there's no ECC between RAM and CPU; it's not credible for anything but a cheap home machine. Any sources?


It sounds wrong because it is wrong. In general, error rates in the modules scale with the physical volume of the chips, not the amount of memory they contain (because the primary root cause is cosmic rays or radioactive contamination inside the chips). Error rates on the bus mostly depend on external circumstances, and scale with transfer amounts, not memory amounts.

Most papers characterizing errors are very old (so they used memory chips with large feature sizes), and they scaled their error rates by memory amount (because they didn't know better). This results in massively overinflated bit error rates.

It's possible to prove that this is wrong very easily by just writing a program that allocates a large pool of RAM, constantly reads it and writes back to it over a period of time, and then checks the results after a while. I once did this for an argument on the internet -- on my old Intel Sandy Bridge system with 16 GB of non-ECC RAM, I allocated a pool of 8 GB, wrote ascending integers from 0 to 2^31 to it, and then iterated over it, just reading it in, checking if it was still correct and terminating if not, and writing it back out to the same address. This program ran for a week with no errors before I had to end it because I needed the machine. One bit error per 4 GB per 3 days is completely not credible.
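For anyone who wants to repeat the experiment, a minimal sketch in C of the same idea (pool size and pattern are placeholders; this is not the original program):

    /* Crude soft-error scan in the spirit of the test described above:
       fill a large pool with ascending integers, then loop forever
       reading each word, checking it, and writing it back. Any mismatch
       is a candidate bit flip (or a sign of a genuinely bad DIMM). */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        size_t words = (size_t)1 << 30;              /* 2^30 x 8 bytes = 8 GiB */
        uint64_t *pool = malloc(words * sizeof *pool);
        if (!pool) { perror("malloc"); return 1; }

        for (size_t i = 0; i < words; i++)
            pool[i] = i;                             /* ascending pattern */

        for (unsigned long pass = 1; ; pass++) {     /* run until killed */
            unsigned long flips = 0;
            for (size_t i = 0; i < words; i++) {
                if (pool[i] != i) {
                    printf("pass %lu: index %zu holds %llu\n",
                           pass, i, (unsigned long long)pool[i]);
                    pool[i] = i;                     /* repair and keep scanning */
                    flips++;
                }
            }
            printf("pass %lu done, %lu suspected flips\n", pass, flips);
        }
    }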


I don't completely understand your conversation. Maybe the error rate is inflated, but as for:

> . I can't believe there's no ECC between ram and cpu, it's not credible for anything but a cheap home machine.

it is absolutely not wrong: there is no ECC between CPU and RAM for most consumer and desktop computers, and even tons of embedded systems, and that's basically the only bus lacking even basic integrity checks, so there really should be some ECC also there, nearly everywhere.


Isn't my "cheap home machine" equivalent to your "most consumer and desktop computers"?

As for embedded, I'd be surprised if there's ECC at all, never mind on the bus (with the exception of high reliability stuff like spacecraft and factory controllers. Clocks, cookers, microwaves, etc don't count).


> Isn't my "cheap home machine" equivalent to your "most consumer and desktop computers"?

Kind of, that's why I was saying it is not wrong. IMO though, there should be ECC on everything but the really cheapest computers, if the application even allows it, meaning ECC for business desktop computers and home computers marketed as using state-of-the-art tech, like Core processors. Maybe you can avoid it in game consoles, or computers marketed for 99% gaming (except those risk being expensive and using good quality processors, so what's the point), but that should be it.


I believe that there might have been a typo in what the previous poster has written.

IIRC, the error rate is more like 1 bit error per 4 GB RAM per 30 days (in the relatively recent studies made by Google for their servers).

Nevertheless, that is still not negligible. For workstations and servers with 64 GB DRAM or more, the errors can become frequent. Also, at high altitude the error rate is higher.

Moreover, besides the errors due to radiation, there are errors caused by electrical noise, which can become more frequent in old computers with cheap DIMM sockets, whose contacts have oxidized.

(I actually had this problem in some HP laptops, which had been stored for a long time without being used; being cold made the SODIMM sockets more vulnerable to humidity; after identifying the cause and scrubbing the sockets, the frequent memory errors disappeared.)


The internal ECC is useful but a separate feature. Traditional ECC, achieved by adding extra storage, is still available and will also protect the data as it's sent across the bus. Sadly it now requires 25% extra chips instead of 12.5% extra chips, because of DIMMs being split into two subchannels. Also sadly there's no way to generate a bus-level checksum on the fly.
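For reference, the arithmetic behind those fractions (standard widths: a DDR4 ECC DIMM is 72 bits wide, a DDR5 ECC DIMM is two 40-bit subchannels):

    DDR4 ECC:  64 data + 8 ECC bits        ->  8/64 = 12.5% extra
    DDR5 ECC:  2 x (32 data + 8 ECC) bits  -> 16/64 = 25% extra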

I don't know if internal ECC errors ever get reported.


Less "crashed" and more like "normalized".


Here is a months-old LTT video predicting this cycle: https://youtu.be/aJEq7H4Wf6U


This happens with every new DDR generation, there's not much to predict.


Yes, that is the point explicitly made in the video.


And here I am still rocking DDR3 on third gen Intel hardware with no complaints!


I was this way until late last year; I had an overclocked i5-3570K with 32 GB of DDR3. But I bit the bullet and went to a 5900X; I didn't realize how CPU-limited I was at the time on games and programs. But it lasted 7 years, and I put off upgrading until I could get roughly 2x the single-core performance.


I am still on an i7-3770 & 16 GB of RAM on my desktop, and until about a year ago I still thought it was okay, but I have really been noticing its limitations now next to my M1 Mac. Could possibly be due to Meltdown/Spectre patches in the kernel as well. Looking forward to Zen 4!


I've considered upgrading each year for the past 3 or 4 years but each time I check prices I can never justify the cost. Given that 90% of my computer use is now on a work issued Macbook pro, I can't really justify spending 2-3k on a desktop upgrade if my current setup works fine for coding, very occasional gaming and multimedia consumption.


Pre-Rowhammer DRAM is hard to find now.


Thought the same on 4th gen, but figured the box is becoming a bit old and upgraded out of curiosity to 12th.

Holy crap the thing flies and the cooler barely spins on idle, if at all.


Is there any speculation as to what is causing the drop?


IIRC a lot of the initial DDR5 shortage was caused by short supply of the PMICs that are now included on each DIMM (where previous generations had the voltage regulation on the motherboard). I haven't heard anything recently about that issue.

AMD has announced that their new generation of desktop processors launching in the fall will use DDR5 exclusively, while Intel's current desktop processors can use DDR5 or DDR4 depending on the motherboard. So memory manufacturers should be preparing for a sharp increase in DDR5 demand instead of it just being a premium option.


The graphics card bubble seems to be reducing, chip shortages affecting other components may be reducing the need to upgrade RAM¹ both for personal and commercial users, and a general drop in demand because of recent cost of living increases² in many places.

Furthermore RAM prices generally may have been kept higher in the last two years due to demand from companies and individuals buying or upgrading kit for home working. Even without the migration of many workers back to offices that extra demand would be rapidly cooling off anyway, as those upgrades and new purchases are³ not part of a regular pattern, so this may be an expected correction.

On top of that the tech is no longer as new: production costs will be dropping as processes are refined and output numbers increase. Though I suspect the current drop is more a demand-side thing for several reasons including the above.

Also: early pioneers have been scalped already, the next tranche of buyers may not bite until there is this sort of price drop.

[1] why bother if other parts of your system won't keep up well enough for you to see the benefit?

[2] fuel, food, interest rates rising in some countries affects those with a mortgage, et cacas: fancy tech is a luxury, people may put off such purchases until the costs of essentials settle a bit

[3] hopefully!


AMD is going to be supporting only DDR5 for their next series, so supply is going to go way up.


Demand is going to go way up. Supply should (hopefully) follow.


The standard explanation for almost everything right now is everyone in the supply chain ordered more than they needed, it caused a supply crunch, but now warehouses are full, and suppliers either caught up in production, or people don't want to be stuck with too much old product in their warehouses.


DDR5 is new and there isn't much demand yet so presumably the manufacturers are ramping supply gradually.

I also saw a rumor that the power management IC used by DDR5 DIMMs was in short supply.


Probably everyone anticipating a recession and holding off on ordering any more until they get a clearer idea of what's going to happen.


I'd guess supply outstripping demand - but am open to suggestions from those understanding what the stars are currently doing.


Interesting to see a 24 GB option. Was thinking for my next build that 32 seems a little thin but 64 too much; 48 would be a neat compromise.


Why not just get the 64GB? I doubt the difference between the 48GB and 64GB in price would be that big.


>I doubt the difference between the 48GB and 64GB in price would be that big.

If I can, sure. Glad for the middle-ground option though. E.g. I'd rather have a slightly faster 48 GB kit than 64 GB.

Other thing is that I've got a lot offloaded onto a home server already so less need for GBs on desktop


DDR5 supports 24Gbit chips, though they are not common yet.


I'm running 48 (2x16 , 2x8, all Corsair Vengeance C16). Each pair of DIMMs shares a channel. Works great.


Doesn't it halve the throughput to mix channel sizes like that?


If you install it right, you're not mixing channels. 24 on one and 24 on the other.


Interesting, this is the coolest thing I've learned this week! It looks like there are even situations 3 DIMMs can meet the criteria for dual channel operation https://www.intel.com/content/www/us/en/support/articles/000...


As pointed out, each channel is 24GB (16+8), so it's running dual channel happy as a clam.


32 is thin only if you have a highly specialized workload. It's enough for all games and even pretty hardcore software development.


I hope pricing keeps going down through this fall. Whether it be AMD or Intel for me this upcoming generation, it's looking like the high-end options will remain very memory-starved when it comes to multi-core workloads due to the core counts, and DDR5 helps with that.


> Although currently in well-known stores like Amazon, over 3,000 different kits of DDR4 RAM are on sale and only 213 of DDR5 memory.

Of which, what fraction are counterfeit, floor sweepings, rejects, or stolen? That's too many resellers.


Is this the impact of Shanghai being closed for 2 months? No components out of Shanghai. No stuff in. So lots of companies not investing in new computers, because they have cash flow problems (can't finish goods to sell)?


20% is hardly a crash, but I expect retail prices to be acceptable for Zen 5, say Q3 2023.

Good for this year's laptops and whoever decides to go with Zen 4


I think these hardware manufacturers figured out that they can just keep prices high. Memory and SSD manufacturers especially.


From that picture, what are VPP, BL and DFE?



Well that burst length looks like it's going to be good for bandwidth.

Now, about DRAM latency... perhaps DDR6 can have an I WANTZ IT NAO signal.


Well, that explains why the 32GB RAM laptops I've been shopping for are suddenly cheaper. Time to buy I guess.


Is it normal to call a price drop a "crash", even when not talking about stocks? The title and phrases like "DDR4 memory has also suffered price drops" make it sound as if something terrible happened. Or is it just me?


It's clickbait. "Crash" and "suffer" get more clicks than "lower". Yes, human psychology is that shallow and manipulable.


I wouldn't call it a crash unless there was a severe enough oversupply that some suppliers were in financial trouble. The DDR5 situation is far from an oversupply; it's just a severe shortage easing as the market starts to mature and achieve a semi-reasonable equilibrium.


And that is a problem. I am even wondering whether I should flag it.


Agree. This is just a price drop from the expensive initial price.


It's hardly even a sale.


Bad title. Dropping from extremely expensive to very expensive is NOT crashing. It is still a lot more expensive than DDR4 with no matching performance gains, so it is a false statement. Luna was a crash, not a 20% price reduction.


A) commentary on price trends is always hyperbolic

B) outside of crypto assets, 20% downwards in a month is a crash in every asset class


> outside of crypto assets, 20% downwards in a month is a crash in every asset class

When a product in demand moves from prototype into mass production, the price drops rapidly. That isn't a crash. The expected price is to get close to DDR4.


> commentary on price trends is always hyperbolic

For example, many financial publications will write the exact same thing for a 2% intraday price movement.

There is nothing to explain or rationalize or correct; it's hyperbole.

I think it's interesting to know about prototyping to mass production, but is it or isn't it a crash? Okay, shrug. It's a waste of time to split hairs over that; it's hyperbole.


> It's a waste of time to split hairs over that; it's hyperbole.

It's fine that you don't care about the difference, but for people that aren't particularly in touch with the computer components market it's very useful to point out the difference between a stable price plummeting below the norm and a brand new price plummeting toward the norm.


Yes, good context

In the context of semiconductors, isn't everything inflated, and wouldn't it plummet toward a historical norm with little regard to newness? For now.


Everything's a bit inflated right now but we're most of the way back to normal for consumer computing parts.


B - Memory is a 'class of product', DDRx is just the model.

A few people will buy the 2022 Ford-Model-X for a premium at launch, but once they've bought it, Ford are just selling to the people with a 2010 Ford who just want to upgrade to what's in the new catalogue.

I'm sure some of those upgraders will pick up a bargain on that onsold 2020 stock - but Ford aren't devoting resources to keep making those old models.


It is a commodity though.

If oranges drop from $5/kg to $4/kg I am not calling it a crash.


the futures traders will


If anyone was trading futures on DDR5 the price would be even lower than this level, because that's how the price always works on new types of memory.


RAM is an asset class?


'Semiconductors' is a sector, and it is probably appropriate to consider them a commodity, but no, certainly not its own asset class. The asset class you are looking for is 'commodities', old sport.


This is just the financialization of the American mind


Calls on the American mind, please


It's certainly a commodity right? (at least the RAM chips that get put on modules), and commodities are certainly an asset class?


Physical asset


It's definitely not an asset. It's practically a consumable. It will degrade with use and over time. It will not maintain its current value, let alone increase in value.


All physical assets degrade with use and only a very few increase in value in real terms.


Counterexample: guitars improve with use and age, not degrade.


Every part that the player or strings regularly touch will wear with use and eventually need to be replaced or repaired.


Strings are not the asset, they are a consumable. Many store guitars long term without strings to avoid unnecessary additional neck tension.



