DRAM Prices in ‘Freefall’ (eetimes.com)
107 points by baybal2 13 days ago | 157 comments





We're still not at 2016 levels. I don't know an equivalent website in English, so even if you don't understand Dutch, try the Tweakers Pricewatch for RAM[1]: click some products and scroll down to the price chart. Prices in 2016 were lower than they are now; see for example this set of 16GB Corsair modules[2].

Prices are coming down to previous levels, but we're not even there yet. Saying they are in freefall is like saying sofa prices are in freefall because a sofa now costs $900, after being marked up to $1500 from an initial $600. Except that sofas don't get cheaper every year and electronics typically do, so really, RAM prices are still sky high.

[1] https://tweakers.net/categorie/545/geheugen-intern/producten...

[2] https://ic.tweakimg.net/ext/i/?ProduktID=458074&deliveryCoun... (PNG)


If you haul a sofa to the top of a cliff and then throw it off, it’s in free fall even if it started at the base.

It’s great to clarify that prices are still higher than previously, though. I wouldn’t have known that otherwise.


Kudos for relevantly extending the sofa analogy.

Honestly, that’s really the only reason I made that reply.

Judging from [2], a ~40% drop would equate to the price of 2016, so a 30% drop is pretty close.

It is in freefall because the price drop shows no sign of slowing. I am just surprised that DRAM still hasn't found a price floor somewhere along the way; it seems server consumption of DRAM isn't as high as DRAM in mobile phones and PCs combined.

Both DRAM and NAND have been major cost components in servers; hopefully we'll see some price reductions in cloud computing soon.


Here's an English site (EU sellers): select a product, open the price chart at the top right, and select "all".

Just from sampling a handful, it looks like some have dropped below their 2016 levels while others have not.

https://geizhals.eu/?cat=ramddr3&xf=1454_8192~5828_DDR3


DDR3? That's RAM that hasn't been used in anything currently manufactured for the past 2-3 years. Try DDR4 and we are nowhere near 2016 levels.

I keep adding RAM and I keep running into 95%+ usage. Main consumer: Chrome, just with a few dozen tabs open.

Followed by Android Studio (3GB for a medium project) and IntelliJ IDEA (another 3GB for a smallish project, plus 15% CPU constantly).

If prices keep going down I'll just put in 128GB and call it a day.

Slack takes more than half a Gig with only 5 teams. It's a chat app, what the F?


The past couple of days I took a deeper look into the RAM usage of Android Studio as I had my fair share of problems with it. I'm not sure what the 3GB you mentioned include, but I made the following observations:

- Android Studio itself: default heap size of 1280MB[1] plus ~400MB of metaspace

- Gradle: That's the part which surprised me the most:

- Most projects probably have a custom heap size set for Gradle. Turns out: if you're using Kotlin, the same amount of heap is also used for the Kotlin compile daemon. So if you configure -Xmx2G you'll end up with 2GB for the Gradle daemon and 2GB for the Kotlin compile daemon. The heap size for the Kotlin compile daemon can be separately configured by setting `-Dkotlin.daemon.jvm.options=-Xmx<xxx>`[2] (rough example gradle.properties at the end of this comment).

- Gradle 5.0 is way more memory efficient than the versions before and they even drastically lowered the default heap sizes[3].

- If you still have a custom heap size in your `dexOptions` set: that's not recommended anymore[4].

[1]: https://developer.android.com/studio/intro/studio-config

[2]: https://discuss.kotlinlang.org/t/solved-disable-kotlin-compi...

[3]: https://github.com/gradle/gradle/issues/6216

[4]: https://stackoverflow.com/a/37230589/4779904
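
For reference, a rough sketch of what that might look like in a project's gradle.properties (the numbers here are made up and you'd tune them per project; the Kotlin daemon property is the one from [2]):

    # gradle.properties - example values only
    # ~2GB for the Gradle daemon, ~1GB for the Kotlin compile daemon
    org.gradle.jvmargs=-Xmx2g -Dkotlin.daemon.jvm.options=-Xmx1g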


A concept a High Performance Computing professor told me once: A program which is memory intensive must be slower than one which needs little memory -- just because load/write operations from the CPU take time.

That's one of the reasons why some bloated contemporary desktop software (e.g. Slack) feels about as fast as its historic counterparts did in 1995 (e.g. our MSN/ICQ clients back in the day).


That's a nice sound bite, but isn't really true. There are many cases where we increase memory usage for faster software.

Side note, lessons from HPC aren't really that applicable for chat programs.


>we increase memory usage for faster software.

We increase memory usage for faster software development.


No, lots of algorithms and data structures use more space to provide more speed. It's not simple laziness.

I don’t think that’s what Slack is doing here though :)

I’m totally with you for things that are actually doing serious numeric/algorithmic work. Matrix inversion runs a helluva lot faster when the whole matrix fits in RAM. And there are undoubtedly lots of places where you can use different algorithms for the same problem that can trade off memory usage and computational complexity.


It does happen obviously: https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff

But yes, running another browser + javascript interpreter on a machine that already has a browser + javascript interpreter running isn't what that article is talking about.


Constraint solvers come to mind. They're pretty much the most memory-hungry applications I can think of, though some of the VLSI EDA tools need triple-digit GBs of memory to do anything.

You can also increase memory usage for faster software, as is the case with memoization and lookup tables.
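
As a toy illustration of that tradeoff (hypothetical Java, names made up), a lookup table that memoizes an expensive function spends heap to avoid recomputation:

    import java.util.HashMap;
    import java.util.Map;

    class Memo {
        // Lookup table: results we've already computed, held in RAM.
        private static final Map<Long, Long> cache = new HashMap<>();

        // Stand-in for an expensive computation (sleeps to simulate cost).
        static long slowSquare(long n) {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            return n * n;
        }

        // Memoized version: more memory, far fewer slow calls.
        static long memoSquare(long n) {
            return cache.computeIfAbsent(n, Memo::slowSquare);
        }

        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) memoSquare(i % 10); // only 10 cache misses
            System.out.println("cache entries: " + cache.size());
        }
    }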

Reading from RAM is certainly faster than disk, but reading less is faster than reading either from RAM or from disk.

Chrome is a big application that tries to be everything for all people. If Desktop performance mattered to companies, folks would strip out the browser and use native components. (This is literally React Native on mobile, even if programmers are still mostly writing in JavaScript instead of C++ or Objective C or Swift.)


> Reading from RAM is certainly faster than disk, but reading less is faster than reading either from RAM or from disk.

This is just not true in a lot of ways.

For example, you can use memory to store the same data in multiple ways, and then you get blistering fast access, whereas if you only store it in a single way, you may get a range of read performance between "as good" to "really really bad".

I understand why what you said sounds true, but it's not that simple at all.
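
A trivial sketch of the "same data, multiple ways" idea (hypothetical Java, nothing to do with Chrome specifically): keeping a redundant hash index next to a list costs extra memory but turns an O(n) scan into an O(1) lookup.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class UserDirectory {
        static class User {
            final long id;
            final String email;
            User(long id, String email) { this.id = id; this.email = email; }
        }

        private final List<User> users = new ArrayList<>();        // representation 1: insertion order
        private final Map<String, User> byEmail = new HashMap<>(); // representation 2: hash index (extra memory)

        void add(User u) {
            users.add(u);
            byEmail.put(u.email, u);
        }

        // The redundant index makes this O(1) instead of scanning 'users'.
        User findByEmail(String email) {
            return byEmail.get(email);
        }
    }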


Yes, you're right. In hindsight, I wish I could delete that first sentence -- there are obvious counter-examples (ray tracing / movie rendering, search engines, memoization, dynamic programming).

Conversely cache misses are painful and largely haven't gotten faster over the years.

Exactly. And the reason why programs that use a lot of memory might feel slower than ones that use a little is the cache miss rate.

Adding memory often makes an entire system faster (again, because you can cache things in the faster RAM rather than paging out to disk). But for two comparable programs, I could see the argument that the one with the lower memory footprint would be faster.


If the algorithms are otherwise identical, sure.

Often the memory being used is doing things like pre-loading information from disk or the network before it's needed, or caching results for reuse.

Or it's storing the information in a faster but less dense data structure - kind of like denormalizing a database for performance reasons.


Yup, as DDR clock speeds have increased, so have CAS timings.

It's a serious problem in compiler optimization. In the ideal world, a program with loop unrolling and vectorization should always be faster, but in practice the bigger binary size induces cache misses and may produce a slower program. As time goes by, compilers keep adding increasingly complex techniques based on space/time tradeoffs, making the problem trickier.

Historically, under some circumstances, a program compiled with -Os may paradoxically be faster than one compiled with -O2, and -O2 may be faster than -O3. I think the situation is getting better now, and the optimization levels mostly work as advertised. But there are still some problems with automatic loop vectorization.


> In the ideal world, a program with loop unrolling and vectorization should always be faster, but in practice the bigger binary size induces cache misses and may produce a slower program.

The solution is to use abstract, arbitrary vectors in the ISA instead of packed SIMD and unrolled loops. RISC-V gets this right with their V extension.


There are many different kinds of loop unrolling, and RISC-V's V extension is not the only one. Itanium had the same issue: the loop unrolling assist in the instruction set was not viewed by many compiler people as being sufficient.


What's the paradox? The compiler simply can't tell.

Faster, or perhaps more features.

I just don't think this is true.

A very common tradeoff we make is to cache the results of time consuming calculations in memory, or even cache raw data in memory to avoid database calls. Both of these techniques increase memory usage, but improve performance.


That depends on how expensive those calculations are relative to memory access. CPUs are very fast at doing a ton of math but very slow at reading or writing to random memory.

Yes, of course, but my point wasn't that if you cache all results your software will be faster. My point is just that it is possible to write code that consumes more memory and runs faster under some circumstances, which refutes the OP's claim that all software that uses more memory is slower.

A program which is memory intensive must be slower than one which needs little memory -- just because load/write operations from the CPU take time.

Not if the alternative to reading/writing from RAM is reading/writing from disk. One piece of data-processing software I used to use had its roots in the early 80s and was very memory efficient. It achieved this by rewriting the input data to disk in a clever way and then reading and writing just the parts it needed. I could process gigs of data using only a couple of hundred MBs of RAM. A different, more modern program I also used would slurp in as much data as it could fit in RAM and then just go from there. It would happily use 30 of the 32GB of RAM I had every time it ran, but it was also 1-2 orders of magnitude faster.


ICQ was much faster on a 150MHz Pentium than Slack is on a modern computer today.

If they're algorithmically the same, that's true, otherwise not. Caching computations takes space, but can drastically speed up runtime performance.

> A concept a High Performance Computing professor told me once: A program which is memory intensive must be slower than one which needs little memory -- just because load/write operations from the CPU take time.

Hi, there's this guy named 'cache' that wants to speak with you.


> Main [RAM] consumer: Chrome, just with a few dozen tabs open....Slack takes more than half a Gig with only 5 teams. It's a chat app, what the F?

Because it's an Electron app, i.e. another copy of Chrome.

Actually, for all the money they've raised, I can't understand why the Slack apps are so terrible. I use the Mac and iOS versions, and they still can't show useful activity badges; video doesn't work on iOS... what are they spending their billions on?


We've opened a lot of tickets about the "useful activity badges" but they seem to think it's a feature, not a bug. (I assume to keep remote devs happy so managers can't drill down to the minute.) I know for sure that their underlying tool to determine away time is very accurate.

If you have the RAM, wouldn't you rather the software use it? Is there some benefit to having memory sitting around empty and idle?

As long as the software doesn't require all that memory, but can work OK with less and then scale up to use more if you've got it, I guess I don't see what the problem is.


How often does your computer exclusively run Chrome? How easily can the OS take the memory back from Chrome to run something else? (That one is easy: it cannot, except in the worst-case scenario where it kills the process.)

> How easily can the OS take the memory back from Chrome to run something else? (That one is easy: it cannot, except in the worst-case scenario where it kills the process.)

It actually can. It's called swapping. An amazing technology from the 90s.


That's not what the GP meant; the analogous situation is loading GBs of data into a database, resulting in a multi-GB file on disk. If you truncate the data, it's gone, but the database won't actually shrink the file on disk unless you vacuum too. Chrome might not be using all the memory it has claimed, but it's not necessarily releasing all that it can back to the system either.

> but it's not necessarily releasing all that it can back to the system either.

That's exactly the point of swapping. Memory which was not used in a (relatively) long time is taken away forcefully from the process if there is memory pressure on the system.

So it doesn't matter if Chrome releases it or not, it will be taken away if it just sits there.


> is taken away forcefully

To disk. Waiting for memory that a badly behaved app overzealously ate up without using to finish flushing to disk before being able to use it is worse than simply having the memory readily available. This is why I always disable swap, opting to let bad processes die, sometimes even pulling the trigger myself.


If you ever hit actual swapping, then your system performance is already way past the point of no return.

In modern systems, paging is okay. But not swapping.


In even more modern systems, with NVMe NAND storage, even swapping is ok.

> from the 90s.

From the 60s. The Atlas computer had virtual memory. I originally thought it was the System/360.

https://www.computerhistory.org/timeline/1962/


That's what the virtual memory system does, provided you're not running with swap disabled. A page fault isn't hard for the kernel to handle, it's just a severe performance hit.

Does it gracefully & rapidly clean house if another app needs some? Does the extra usage benefit you in any way at all, to make up for the disk cache that RAM would otherwise be used as?

> Is there some benefit to having memory sitting around empty and idle?

Yes, modern operating systems use free RAM as cache for the HDD. If you have an I/O intensive process, having a large amount of RAM is really helpful.


If you have any kind of workload with IO patterns that are not embarrassingly parallel, you want your files to be in the page cache, i.e. RAM, to minimize the access latencies and thus the wall time of your workload.

For that to work applications shouldn't be gobbling up all the ram, instead they should leave it to the OS to cache data.

Sure, NVMe drives help, but loading things from RAM is snappier still.

Also, those huge memory footprints themselves can also cause issues. E.g. they can increase GC pauses in GCed languages or decrease cache hit rates when allocations are scattered all over the place.


I played Minecraft again last year and it did exactly what you said. I was constantly running out of my paltry 8GB of RAM on a Linux system that wasn't running anything except Minecraft. If I got unlucky it went up from 98.5% RAM utilization to 100% and then the whole system locked up. It probably was the worst gaming experience of my whole life.

So should I write software that just leaks and not worry about it?

That's the Electron way isn't it?

Cut corners and offload the trade-offs onto your end users.


Slack is running a Chrome instance - look at it in detail using Sysinternals' Process Explorer. Skype now does the same.

It's so grossly dumb - giant native applications running a hidden web browser to show a webpage that does its utmost to pretend to be a native application (but it doesn't work - Alt-F launches the File menu in Slack, but cursor left/right doesn't take you to the next menu like REAL native apps do)...

It's a future I do not wish to be involved with!


It's like the saying about giving developers two weeks' worth of work and a month to do it, then watching it take a month: give web developers unlimited memory and they will find a way to make their desktop applications eat all of it.

I'm occasionally tempted to add RAM, but every year my OS and applications get more memory-efficient so I have less of an excuse to spend any money on that. I've got 16GB and it's getting more sufficient all the time! The latest compiler upgrades make it twice as fast as last year, on the same hardware. Life is pretty good.

(Avoiding hardware upgrades isn't all peaches and cream. My graphics cards are getting less acceptable all the time, and I'll probably need to upgrade this year.)

I don't use Chrome or Slack or any of those others. The easiest way to use less memory is to not run programs which require a bunch of it. People who drive Ferraris don't complain about their fuel economy.


You can change the startup parameters of IntelliJ to reduce the maximum memory usage. If you don't, it will optimize for least GC time rather than least memory usage, and maximum usage will be 75% of total memory.

With Java 12 you'll be able to tweak G1 to minimize footprint without setting an arbitrary maximum.

https://openjdk.java.net/jeps/346
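
For example (made-up numbers; in IntelliJ this goes in Help > Edit Custom VM Options), capping the heap looks roughly like this; the last flag is the knob JEP 346 adds, so it only applies once the IDE runs on a 12+ JVM and should be left out on older ones:

    -Xms256m
    -Xmx1024m
    -XX:G1PeriodicGCInterval=60000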


I bet ZGC would also be pretty great for this use case as it is super conservative.

Unused memory is wasted memory, they say.

Yeah, software takes more RAM, so when your RAM is full your computer slows down (because it probably needs more than 100% of your RAM), therefore you buy more RAM. Now guess what? Your RAM is not at 100% anymore, your unused memory is wasted anyway, so why not use it?

Repeat the cycle.


Not exactly -- more like no matter how much RAM you have, it makes sense for your software to use it all rather than disk.

It means that however much you have, your system should use as much of it as possible. Otherwise it's just sitting there doing nothing. Contrary to popular belief, the more RAM you're using, the faster your system will be (assuming it's using the RAM efficiently)


>Contrary to popular belief, the more RAM you're using, the faster your system will be (assuming it's using the RAM efficiently)

Is this a joke? If I have 16GB of RAM but my programs use 18GB then my system will suddenly become faster?


Free memory is always used by disk cache.

On Linux, not all of it, in my experience. If you have a lot of memory, some of it may be completely free.

It should gradually trend towards using the vast majority of the free space. The disk cache won't consume memory unless you're reading new, not-already-cached pages from disk; so if you're not reading enough data from disk, then it might not consume everything. (E.g., if you've rebooted recently, and just haven't read enough from disk to consume everything.)

If you work w/ removable media, IME the cache will free the pages associated w/ that media when it's removed (as the cache must be assumed invalid when/if the media is plugged back in). Playing a DVD, and then ejecting it, would often result in a large drop in disk cache. (Back before Netflix…)


Only a very small percentage is kept "free". Everything else should be made available to buffer cache or the application, etc....


are you complaining about cache data that makes the system run faster? I would expect any system that manages cache data to fill most of your RAM.

Most task/activity monitors don't consider OS cache memory as "used".

I'm not sure exactly what you are saying, but what I am saying is that Chrome and other applications use either their own, or OS mechanisms, to cache read pages. This is visible on all systems I work with (mac, win, linux), either attributed to the process, or to OS read cache.

>I would expect any system that manages cache data to fill most of your RAM.

>I'm not sure exactly what you are saying

> This is visible on all systems I work with (mac, win, linux), either attributed to the process, or to OS read cache.

My response was directed at your part of the comment:

>I would expect any system that manages cache data to fill most of your RAM.

All systems do that, but it doesn't look "used" (e.g. on the Windows memory graph, the dark region excludes OS-cached memory). Unless they're using free[1], no reasonable person would think 99% of their RAM is being used.

[1] https://www.linuxatemyram.com/


I use free. Pretty much everybody I know who uses Unix knows this. However, more importantly, you can't really show per-process cached RAM and sum it up properly (since multiple processes can cache the same disk reads). Not really an issue in Chrome, which manages its own RAM cache (and is the system I was referring to here).

I mostly get by with wee-slack in a terminal. No in-line image support... But that can also be a feature.

https://github.com/wee-slack/wee-slack


I feel like I'm going crazy here. Wee-slack looks exactly like IRC did 20 years ago, except IRC was free. This is madness.

Maybe if the IRC people had been able to evolve the system to add persistent history, we wouldn't be where we are now. And no, SSHing to a server to run irssi in screen is not, and never was, an acceptable answer.

Not until you wrap it in electron anyway.

hacking up and rebranding an IRCd is a tried and true startup method. Napster, Slack, Discord...


I feel like there's some cultural problem that Google has which prevents them from writing decent end-user software.

Do you have a real reason you need to run Chrome?

You named your own problem: you are using Google technologies (Chrome, Android, Java).

Use more sensible tools and you will not waste as much RAM (Firefox, Visual Studio Code, .NET).

Edit: Numberwang, apparently your account is dead so I can't reply anywhere but here.

Edge is great. I can't complain about its memory usage. Unfortunately, with Microsoft moving to the Chrome engine, I am taking the precautionary measure of moving to Firefox until they have shown they can make Chrome suck less. Firefox is not as good as Edge for my needs, but quite close.


Java is not inherently a memory hog. I used to work for a game developer which will remain unnamed; all their stuff was Java-based, and they used 64-128MB. That included a full 3D engine and textures.

Java's boxing (you can't put an int in a HashMap without first copying it to the heap) and lack of value types (basically the same problem for structs) does impose some extra object references and headers other languages wouldn't require.
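
A rough sketch of the overhead being described (hypothetical example; exact sizes depend on the JVM): the boxed map pays for Integer objects, map-entry objects, and references per mapping, while a primitive array pays 4 bytes per value.

    import java.util.HashMap;
    import java.util.Map;

    class BoxingDemo {
        public static void main(String[] args) {
            int n = 1_000_000;

            // Boxed: each key and value is heap-allocated as java.lang.Integer,
            // wrapped in a HashMap entry object, all reached via references.
            Map<Integer, Integer> boxed = new HashMap<>();
            for (int i = 0; i < n; i++) boxed.put(i, i * 2);

            // Primitive: the same dense mapping as a flat int[], no per-entry objects.
            int[] flat = new int[n];
            for (int i = 0; i < n; i++) flat[i] = i * 2;

            System.out.println(boxed.get(1234) + " " + flat[1234]);
        }
    }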

It's not that much overhead and Java has successfully been used in the embedded space for a long time. (The JVM is separate from the Java language.) From today's wiki page on embedded Java, "Embedded Java minimal requirements start at 30KB of (internal) flash and less than 2KB of (internal) RAM." Sure, sometimes you can't even meet that, but that's hardly Java's fault.

IME, it's not so much the amount of memory¹, as the pressure it places on the GC to keep up w/ these (often small, but numerous) allocations. Other languages can allocate and deallocate these types of structures with just a stack pointer move in both cases, since nothing ever leaves the stack. (I think Java's allocator is typically approximately or exactly as cheap as a pointer-move on allocation, but deallocation is another story.)

¹but I don't want to downplay this concern entirely; in the parent's case of an int needing boxing, that effectively doubles the cost: the int, and now the pointer to that int. (Plus any hidden requirements of the allocator, but even if we assume that's free, the story might still be plenty grim.)


>that effectively doubles the cost: the int, and now the pointer to that int.

Empty objects aren't free either, so you pay triple.


Firefox is not an option. It is an order of magnitude slower than Chrome.

Does anyone know anything about the memory usage of Edge?


You might want to try updating Firefox if you think it's an order of magnitude slower.

At least in JS scenarios, the difference is nowhere near that. It's actually trading first place with Chrome between tests: https://arewefastyet.com/

It's 2019 now. Try Firefox again. For me it feels faster than Chrome. And it has a reader view. That's on Linux and on Windows. On Mac, I admit, Chrome feels faster. But you have Safari.

Firefox's speed depends a lot on which extensions you use.

I use a bunch, like uBlock Origin, uMatrix, Tridactyl, Tree Style Tab, and Stylus, and Firefox slows down to a crawl for me when I have more than about 8 tabs open.

Now, some might advise not to use extensions, but using extensions is half the reason that I use Firefox at all in the first place, as Firefox without extensions really sucks.


I use uBlock Origin, uMatrix and Greasemonkey, and Firefox is nice and fast with several dozen tabs open for weeks at a time.

Slack leaks memory like there's no tomorrow, but that's on Slack. I just close the Slack tab and force a full GC/CC cycle and I get my memory back.

I'd do a binary search of sorts, disable half of the extensions and see if performance is still bad or not etc, to narrow it down to the offending extension(s).


> Firefox without extensions really sucks

The only extension I use is uBlock Origin. I've never felt the need for anything else whether it's on Firefox or Chrome.


Even at its slowest, Firefox was never even close to an order of magnitude slower.

Geek culture has zero loyalty, otherwise people would run the product that has moral integrity even if it costs them a few milliseconds in load times.


>Firefox is not an option. It is an order of magnitude slower than Chrome.

This is objectively false.


> Firefox is not an option. It is an order of magnitude slower than Chrome.

False.

It's faster and lighter than Chrome on my machine.


Anecdotally, looks like this is somewhat reflected by Amazon prices for consumer RAM as well (though not quite "freefall"):

https://camelcamelcamel.com/Corsair-Vengeance-3200MHz-Deskto...



I built my last desktop in 2015, when 16GB of RAM was cheap. Then prices went ballistic. I am glad it's over; when 32GB of RAM costs as much as a new CPU, you know something is wrong.

About time. I've noticed ECC DDR3 prices are falling quite a bit in the used market, which is great for my homelab.

I'm running an old Open Compute Windmill server I picked up when Facebook surplussed all of theirs; can't recommend the thing enough. Two nodes, each with dual 8-core Xeons with HT, 10Gb networking, and I've stuffed them with ~256GB of RAM, PCIe-based NVMe SSDs, and RX 480s. I use one as a VM/container host and the other as a workstation.

It's more like a freight train than a sports car, but I routinely see Chrome taking up 57GB of RAM on the node I use as my workstation and kinda just laugh.


How are you measuring the amount of memory Chrome takes? Do you calculate PSS somehow? It is quite non-trivial to determine this number, due to the complexity of memory allocation.
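
(For what it's worth, on Linux one rough way to do it is to sum the Pss lines from /proc/<pid>/smaps_rollup for every Chrome process; on Windows you need different tooling. A hedged Java sketch of that idea, assuming smaps_rollup is available and you can read your own processes:)

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    class ChromePss {
        public static void main(String[] args) throws IOException {
            long totalKb = 0;
            try (Stream<Path> procs = Files.list(Paths.get("/proc"))) {
                for (Path p : (Iterable<Path>) procs::iterator) {
                    Path comm = p.resolve("comm");
                    Path rollup = p.resolve("smaps_rollup");
                    if (!Files.isReadable(comm) || !Files.isReadable(rollup)) continue;
                    try {
                        if (!Files.readString(comm).trim().startsWith("chrome")) continue;
                        for (String line : Files.readAllLines(rollup)) {
                            if (line.startsWith("Pss:")) {
                                // line looks like "Pss:   123456 kB"
                                totalKb += Long.parseLong(line.trim().split("\\s+")[1]);
                            }
                        }
                    } catch (IOException ignored) { } // process may have exited meanwhile
                }
            }
            System.out.println("Chrome PSS total: " + (totalKb / 1024) + " MB");
        }
    }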

Roughly adding up the amount of memory that all the Chrome processes seen in the Windows task manager report using.

In hindsight I should have clarified that this was not intended to be a dig against chrome. I've frequently got hundreds of tabs open, because with the amount of memory I've got, why not?


Hundreds of tabs shouldn't consume significantly more than 2GB unless you are running demanding webapps.

How much was it? What's the power consumption like?

Looks like ESISO still has some for sale for $381.00[1]. You'll still have to get storage, more memory (if you want), and a gpu.

The single PSU in the chassis that supplies power to both nodes does require 220V power, but I solved this easily enough with a 2000-watt step-up transformer off of Amazon. I've also jerry-rigged an additional ATX power supply, as the native chassis PSU doesn't have 6-pin power connectors to power the GPUs.

I've never calculated how much power the thing pulls under full load, but considering that I've never had any instability, I've got to assume it's less than the 2000 watts the transformer can handle plus the 400-watt ATX PSU for the GPUs. I've got flat-rate electricity from my landlord, so it's essentially free to run for me.

[1] https://www.ebay.com/itm/QUANTA-OPEN-COMPUTE-SERVER-2x-NODES...


The newer Winterfell nodes [1] have made their way to eBay at sub-$100 and do not have the exotic power requirements of the earlier dual-sled Windmill nodes. You can easily power them off a converted server power supply via DIY [2] or commercial [3] means. Windmill nodes do have more space on the PCIe riser and quieter fans going for them as well.

1) https://www.ebay.com/itm/Quanta-Winterfell-Barebone-Node-2x-... 2) http://colintd.blogspot.com/2016/10/hacking-hp-common-slot-p... 3) https://www.parallelminer.com/product/breakout-board-adapter...


Oh nice, I think I have all the DIY power stuff already from my old ethereum mining farm. At $100 we're well into impulse buy territory.

It's hard to go wrong for price/performance when building a server out of off-lease enterprise gear, especially with ECC DDR3 being dirt cheap. People might think this means loud rackmount servers, but you can find boards for most form factors.

There's a small scene for people who do this and share builds, so it doesn't take much research to get started.


> There's a small scene for people who do this and share builds, so it doesn't take much research to get started.

Recommendations on sites/forums?


https://www.serverbuilds.net/ which is an offshoot of https://www.reddit.com/user/JDM_WAAAT They also have a Discord.

r/homelab is also good, but it can be more rackmount-focused as far as gear recommendations go.


I just found this subreddit yesterday, seems right up your alley.

https://www.reddit.com/r/homelab/


Does it make sense to run 128GB of non-ECC? DRAM sees something like 1 bit error per 40 hours per gigabyte [1]. At 128GB, that works out to roughly 128 × 168 / 40 ≈ 538 bit errors of corruption after a week of uptime, assuming you're utilizing it all.

[1] https://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf


Why do you need RAM for chemistry experiments?

RAM has been a terrible business for as long as I can remember (since the 70s). Intel was originally in the RAM business and got out of it early because it was so horrible. I honestly don't know why anyone even bothers (though of course I'm glad they do!).

Don't tell Samsung and Micron, they've been printing extraordinary profits the last two years in the memory segment.

Micron went from being a basket case for years and a relatively cheap acquisition target for the Chinese (state-owned Tsinghua Unigroup tried to buy them in 2015) to generating a $5b profit in 2017 on $20b in sales. In 2018 that skyrocketed to $14b in profit on $30b in sales, epic margins.

Micron's balance sheet improved from a positive $11.8b in net tangible assets, to $33.8b in the latest quarter. They cut their long-term debt from $7.1b to $3.3b over that time.

Samsung has an even larger memory business than Micron. I wouldn't be surprised if they cleared $18-$20b in profit in 2018 on memory alone.

Amazing things happen to pricing and margins when you consolidate an important segment down to only three players. If you slash Micron's 2018 profit by 50%, they'd still only be sporting a six PE ratio presently and would still generate $7b in profit. Memory will be a tremendous business until / unless the Chinese bust up the party (as they hope to in the coming years).


My point is that you're describing what makes the market so terrible. It's a commodity product that's stamped out in high volume. The capex is high but marginal cost is low.

So you get whipsawed by price swings, plus when prices are high, someone with cheap access to capital (either through government support or simply a period of secular low rates, as today) can build a factory and demolish whatever pricing you're getting on your chips.

This is precisely what made Micron a basket case.

If you look at the history of the DRAM business it's been a continuous, expensive string of collapse and recovery. Where are the Japanese giants who used to dominate the business? Where are the US titans that used to exist (Micron itself was a late entry thanks to Simplot's capital). Etc. Think of the steel business; DRAM is no different except its cycles are faster.

25 years from now we'll all be running cheap East African DRAM from factories funded with oil money and built in regions of low cost land and lax safety & environmental standards.


And as soon as Intel's fabs closed, prices soared and Japanese manufacturers were again making a killing on RAM.

Sounds like it's time for another round of price fixing.

Hopefully this time they'll just agree to stop making lossy RAM (Rowhammer), rather than having another fab fire.


How many customers are going to pay more for Rowhammer-proof RAM?

If it's not enough to offset the cost of developing it, I'm not sure why RAM manufacturers would make it.


I mean yes that's the general problem with all counterfeit products - if they're "good enough" there's no economy of scale for the non-counterfeit, so it becomes even more expensive. My point was that if manufacturers are seeking to keep prices from plummeting, then perhaps they can agree to stop competing so hard on price that they're pushed into undermining the functionality of their products.

These things only happen when I have all my memory sockets full :-(

Hearing stuff like this really makes me want to build a PC, but I just know that as soon as I do, I'm gonna be Moore's Law'd so hard. I simply can't justify it given how much money is involved.

We're well past those days, (un)fortunately. My 2015-built i7-4790K/1080/SSD that I use at home feels exactly the same as the more modern desktops I use at work. Even the SSD, which sees heavy use, has burned through less than 10% of its expected lifecycle. The only things I've had to replace are the case fan and the CPU fan, just from age/use, but looking at the horizon, I see no reason I couldn't use the machine itself for another 4 years without issue.

Computers used to become twice as fast every 18 months. Now it takes about 7 years.

I'm still using one I built in 2009. The CPU, motherboard, and PSU are really the only original components anymore, so maybe it's not a great comparison, but it's still a great machine that can do heavier lifting than my maxed-out Surface Book. There are some bottlenecks that are going to force me to rebuild soon, though. I love how we can now get a decade from one build with minor upgrades every few years.

You've got a bit of an edge case... Haswell is the oldest machine I'd still consider modern. Go back one generation and Ivy Bridge is missing AVX2 and loses performance in a number of applications.

Haswell is the oldest CPU that supports AVX2, DDR4, quad-core standard, probably has USB3.0 slots and PCIe 3.0.

"Modern" machines like Skylake have increased to 6-core standard (even for an i5) or more (i7 and AMD Zen are 8-cores now). Those extra cores help with compiles / 3d blender renders / some other tasks, but its not a major improvement otherwise. USB 3.1 Type C isn't a big change from USB 3.0, and PCIe 3.0 and DDR4 are still mainstream.

In as little as a year however, things seem to be changing. Intel Icelake will be adding AVX512 to the mainstream later this year. We'll see whether or not consumers get an uptick in AVX512 usage, or if Adobe / other programs continue to push GPGPU instead.

With DDR5 being released in 2019 (with doubled bandwidth compared to DDR4) and PCIe 4.0 beginning adoption, and with PCIe 5.0 and USB4 standards being released, it seems like a major upgrade cycle will occur in the 2020 to 2021 timeframe. (It seems like the Desktop market is skipping PCIe 4.0 and just going straight to PCIe 5.0, quadrupling I/O bandwidth in one jump)

---------

Laptops are still waiting for LPDDR4 support. Kinda sucks for them. Intel really feels like they stalled out with all of these 10nm issues they've been having. A lot of their technology (ie: Ice Lake) was delayed due to the 10nm issues.


> You've got a bit of an edge case... Haswell is the oldest machine I'd still consider modern. Go back one generation and Ivy Bridge is missing AVX2 and loses performance in a number of applications.

(acknowledging that this website is for a particular subset of computer users) I think this comment misses the point of the original comment. I use a desktop I built in 2011 (i5-2500K) as my main driver - I upgraded to an SSD and an RX 580 recently, and it performs fantastically. It has USB 3, which is important to me for bulk file transfers to removable media. It has played every game I have thrown at it so far. I may upgrade the CPU/RAM/mobo at some point, but haven't felt the need yet.

I'm sure my computer would be pretty terrible compared to a new i7 at the particular CPU-intensive tasks you mentioned (encoding, rendering, compiling...), but for everyday use and gaming it can't be faulted in the slightest. Very impressive for an 8-year-old CPU. Even 3 years ago, an 8-year-old computer would've been trash (core2 duo, ew)


There are some poorly programmed games like Arma 3 that are heavily single core bound.

Haswell actually uses DDR3, unfortunately, and it's becoming a bottleneck for me personally in a few cases now. Otherwise I totally agree though.

It is not as bad as it used to be. If you want to build a gaming PC today, it will most likely last 5 to 6 years without needing any upgrades; most upgrades are barely noticeable if you already use a decent graphics card and an NVMe SSD.

If you have no desire to use it as a gaming rig but simply want a beast for compiling your code/video editing then it should last even longer.


New CPUs aren't much faster than the old ones. Going from an i7-6700k to 7700k to 8700k to 9700k gets maybe 10% faster single threaded performance on each step. GPUs are starting to show a similar story. Those GeForce RTX cards are barely faster on a dollar-per-dollar basis than their predecessors. Unless you need the transfer speeds, those hot new NVME SSDs aren't noticeably faster than SATA SSDs.

I would be building a PC right now, but the one I built two years ago is fine.


> Going from an i7-6700k to 7700k to 8700k to 9700k gets maybe 10% faster single threaded performance on each step

Sure, but you can get CPUs with 2-4x the number of cores at consumer price points, which is great for people who do streaming or virtualization.

> hot new NVME SSDs aren't noticeably faster than SATA SSDs

Oh yes they are, and I'd never go back


"Sure, but you can get CPUs with 2-4x the number of cores at consumer price points, which is great for people who do streaming or virtualization."

They're also great for people like me who use distros like Gentoo where we're regularly recompiling all the software we use as new versions become available.

My entire recompile cycle now takes literally weeks of non-stop compilation, on my old, slow 2-core laptop. Compiling qtwebkit alone takes several days. If I had 8 or 16 cores on a modern processor, that would be a massive improvement.


If you have a case, you can build a more than adequate PC for development work for less than $300. That's assuming 16 gigs of RAM and a Ryzen APU.

No way, you're only getting the RAM at that price point.


The 2200G comes with the Wraith cooler, which I believe is actually pretty good for a stock cooler. So you should be able to get away without the Cooler Master, saving some additional cash :) [unless your prices are CPU-only]. Edit: you would probably need to put the savings into a PSU, though!

It's not quite that expensive. I paid less than $200 a few weeks ago for 32GB of SODIMM to go in a NUC. I think $300 for all the guts of a machine is probably pretty optimistic, but the RAM would only be about a third of it.

It probably never makes sense to build a computer. It’s normally more expensive and if something stops working, it can be hard to isolate the part at fault and RMA it. Buying a prebuilt PC with parts you know are compatible with your OS generally gives you a warranty so that you can simply bring it back instead of wasting more time and money fixing the issue yourself.

<edit> I’m surprised to learn how differing others’ experiences are, especially on the price of the overall system. When I bought my most recent desktop, I typed in the make and model of every part into PCPartPicker and learned that I would be saving over $100 by going pre-built.

There were some fair points made down thread about the PSU and mobo, which I hadn’t really considered. My RMA experiences with a few hard drives versus the exchange process at my local Micro Center (they swapped the whole thing out even though I was well past the return period) were night and day, and many of you must have had the opposite experience to take the stance that you did.

Regardless, I find it interesting how polarizing this issue is and I probably shouldn’t have written my original comment in such a provocative way (although it didn’t seem that way at the time). </edit>


This is not really true. On both of the computers I built recently, every component was warrantied, and in some instances I have even warranty-returned a product of theirs in the past, which was one of the reasons I selected parts from those manufacturers over others.

If you know how to troubleshoot effectively, many issues are not that hard to isolate. There are some that are very time consuming to isolate but those issues tend to be pretty rare (troubleshooting memory etc.).

The nice thing about building a computer is just to be able to configure it exactly to what you want. In some cases some retailers do screwy things and/or send customers the wrong parts although there are some top flight retailers who don't do that and charge a premium for it.

It's not necessarily more price competitive for some things but there are certain key components that retail PC companies tend to cheap out on unnecessarily like the PSU and the motherboard. Cheaping out on the PSU can give the retailer a lot of extra margin but a good PSU is so cheap that it does not make that much of a difference in final price to the user.

If you learn a bit more about things like case airflow or are planning to overclock a lot of things that retail PC sellers do will drive you nuts so once you are at a certain knowledge level it is just better to build your own despite all the many petty aggravations and frustrations that can occur with it. All the annoyances tend to be temporary whereas the benefits last for years.


Wut?

Maybe for general-purpose computers doing nothing special, but it's generally more cost-effective to select and build your own PC for things like gaming.

Beyond that, computers are pretty robust. It's pretty hard to select parts that are incompatible, assuming you get a CPU that fits the socket on the motherboard you buy and you don't go cheap on your PSU. Lots of tools (and communities) exist to make sure you're selecting parts properly (not overdoing it on the CPU when you're getting a low-tier GPU, for instance). I'm not sure what your OS comment implies, because virtually everything will work on most widely adopted OSes (and if you're running a pretty obscure Linux distro, I'd assume you're familiar with driver woes).

I'm more than happy to be corrected, but if you're buying prebuilt you're likely paying for the parts and the labour involved. Unless the company is getting steep discounts on volume purchases (which they are only getting for bottom tier parts) you're not saving money by having them put it together for you.


My experience suggests otherwise. System Integrators use the same parts I like but often get them for a discount.

I build my own systems because I like the constant incremental upgrade model. But when I recently had to help someone purchase a system, I found it was hard to beat SI prices with equally specced hardware.


I recently dismantled 3 generations of pre-built PC's to remove the hard drives. One computer needed a 90-degree angled Phillips head to release the removable tray... ended up using a hammer to smack it out...

I built my current computer and don't need a screwdriver to open the case, or to replace the SSD, HDDs, GPU, or RAM. They all just pop out now! First computer I've upgraded too.


Could you name some of these companies with prices competitive with building yourself? I'm in the market for a new PC when Zen 2 comes out. I would prefer to buy the computer instead of assemble it myself, but I've never found the prices to be competitive.

I worked with a friend who purchased from iBuyPower (shudder). But the experience turned out fairly well.

But Corsair, Falcon NW, and CyberPower are others.


That's true, if you have a complete failure. If you have a partial failure, self-build can be better, because you can pull the failed part and run with a degraded system while the RMA processes, then install the RMA. With pre-build, you're sending the whole system back and are without a computer for 5-10 days.

It seems experiences vary wildly, and I'd assume that's based on differing criteria.

For a general purpose / MS Office / browsing desktop, going pre-built may be viable. For any more specific / demanding usage, building a computer yourself will allow you to get the performance you need for the price you want.

I'm a decade or more past the point of enjoying building a new PC, but a pre-built machine never seemed to make financial or use-case sense for me. YMMV :)


When you build your own PC, you can decide which parts to spend more money on, and on which to skimp. With pre-built you often have little choice, or the choices are very limited or very expensive. Typical example is memory, where the pre-built usually come with very little memory, and increasing memory costs much more compared to the retail prices of memory modules.

2 reasons why it happens:

The industry, the speculators, and the guidance from "smart investor people" were massively overoptimistic about the AI and mining trends. All those billions pumped into those niches failed to translate into datacentre spending. Server sales very likely went down in 2018.

And the same for mobile – "AI" phones happened to be a bad sell.


In Windows and in Linux (it might be similar with OS X, I haven't tried it), I find it is not easy or straightforward to set RAM usage limits on processes. I wonder why OSes don't provide the user easy controls for that.

You can, but you'd hate the results, since the OS would have to page memory to/from disk like crazy when the working set runs out. See:

https://blogs.technet.microsoft.com/mrsnrub/2009/12/08/windo...

Or:

https://docs.microsoft.com/en-us/windows/desktop/api/memorya...

You can call the Win32 API directly from PowerShell if you wish (with QUOTA_LIMITS_HARDWS_MAX_ENABLE). System Resource Manager is only available on Windows Server.

All you're doing is limiting how much physical memory the process can consume before it pages. In either case the process will continue to consume all the memory it desires, it just varies where the additional consumption is stored.


That will replace slowness with crashes, as very few programs are written to survive being told that no more memory is available.

I think crashes already happen but the process that gets killed may not be the one that caused the system to run out of memory in the first place.

[flagged]


Maybe so, but please don't post unsubstantive comments here, and especially not dismissive ones.

If you know more than someone else, share some of your knowledge so we can all learn. Or, if you don't have time or don't want to, simply don't post.

https://news.ycombinator.com/newsguidelines.html


What I learned from the comments for this post: don't use Slack.




