Prices are coming down toward previous levels, but we're not even there yet. Saying they're in freefall is like saying sofa prices are in freefall because a sofa costs $900 after being marked up to $1500 from an initial $600. Except that sofas don't get cheaper every year and electronics typically do, so really, RAM prices are still sky high.
 https://ic.tweakimg.net/ext/i/?ProduktID=458074&deliveryCoun... (PNG)
It’s great to clarify that prices are still higher than previously, though. I wouldn’t have known that otherwise.
It is in freefall because the price drop shows no sign of slowing. I'm just surprised that DRAM still hasn't found a price floor somewhere along the way; it seems server consumption of DRAM isn't as high as mobile-phone and PC DRAM combined.
Both DRAM and NAND have been major cost components in servers; hopefully we'll see some price reduction in cloud computing soon.
Just from sampling a handful, it looks like some have dropped below their 2016 levels while others have not.
Followed by Android Studio (3GB for a medium project) and IntelliJ IDEA (another 3GB for a smallish project, plus 15% CPU constantly).
If prices keep going down I'll just put in 128GB and call it a day.
Slack takes more than half a Gig with only 5 teams. It's a chat app, what the F?
- Android Studio itself: default heap size of 1280MB plus ~400MB of metaspace
- Gradle: That's the part that surprised me the most:
- Most projects probably have a custom heap size set for Gradle. It turns out that if you're using Kotlin, the same heap size is also used for the Kotlin compile daemon. So if you configure -Xmx2G, you end up with 2GB for the Gradle daemon and another 2GB for the Kotlin compile daemon. The heap size for the Kotlin compile daemon can be configured separately by setting `-Dkotlin.daemon.jvm.options=-Xmx<xxx>` (see the sketch after this list).
- Gradle 5.0 is way more memory-efficient than previous versions, and they even drastically lowered the default heap sizes.
- If you still have a custom heap size set in your `dexOptions`: that's not recommended anymore.
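To make that concrete, here's a minimal sketch of what this could look like in `gradle.properties`; the 2GB/1GB figures are arbitrary examples, not recommendations:

    # gradle.properties
    # Heap for the Gradle daemon. Without the kotlin.daemon option,
    # the Kotlin compile daemon would default to the same -Xmx2g,
    # doubling the total heap reserved for builds.
    org.gradle.jvmargs=-Xmx2g -Dkotlin.daemon.jvm.options=-Xmx1g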
That's one of the reasons why some bloated contemporary desktop software (e.g. Slack) feels about as fast as its historic counterparts did in 1995 (e.g. our MSN/ICQ clients back in those days).
Side note: lessons from HPC aren't really all that applicable to chat programs.
We increase memory usage for faster software development.
I’m totally with you for things that are actually doing serious numeric/algorithmic work. Matrix inversion runs a helluva lot faster when the whole matrix fits in RAM. And there are undoubtedly lots of places where you can use different algorithms for the same problem that can trade off memory usage and computational complexity.
Adding memory often makes an entire system faster (again, because you can cache things in the faster RAM rather than paging out to disk). But between two comparable programs, I could see the argument that the one with the lower memory footprint would be faster.
Often the memory being used is doing things like pre-loading information from disk or the network before it's needed, or caching results for reuse.
Or it's storing the information in a faster but less dense data structure - kind of like denormalizing a database for performance reasons.
Historically, under some circumstances, a program compiled with -Os may paradoxically be faster than one compiled with -O2, and -O2 may be faster than -O3. I think the situation is getting better now, and the optimization levels mostly work as advertised. But there are still some problems in automatic loop vectorization.
The solution is to use abstract, arbitrary vectors in the ISA instead of packed SIMD and unrolled loops. RISC-V gets this right with their V extension.
This is just not true in a lot of ways.
For example, you can use memory to store the same data in multiple ways and then get blisteringly fast access, whereas if you only store it in a single way, your read performance can range anywhere from "as good" to "really, really bad".
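A toy Kotlin sketch of the idea (all names here are made up for illustration): keep the same records indexed two ways, paying roughly double the index memory for O(1) lookups on either key.

    // Hypothetical example: User records stored under two keys.
    data class User(val id: Int, val email: String)

    class UserStore {
        private val byId = HashMap<Int, User>()
        private val byEmail = HashMap<String, User>()

        fun add(user: User) {
            // Store the same record in both indexes: more memory...
            byId[user.id] = user
            byEmail[user.email] = user
        }

        // ...but now both lookups are O(1); with a single index,
        // one of them would be an O(n) scan over all records.
        fun findById(id: Int): User? = byId[id]
        fun findByEmail(email: String): User? = byEmail[email]
    }

    fun main() {
        val store = UserStore()
        store.add(User(1, "a@example.com"))
        println(store.findByEmail("a@example.com"))
    }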
I understand why what you said sounds true, but it's not that simple at all.
A very common tradeoff we make is to cache the results of time consuming calculations in memory, or even cache raw data in memory to avoid database calls. Both of these techniques increase memory usage, but improve performance.
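As a minimal Kotlin sketch of that tradeoff (the slow function is just a stand-in for a real calculation or database call):

    // Cache results of an expensive computation in memory:
    // memory use grows with distinct inputs, but repeat
    // calls become near-instant.
    val cache = HashMap<Int, Long>()

    fun expensiveCalc(n: Int): Long {
        Thread.sleep(100) // stand-in for slow work or a DB call
        return n.toLong() * n
    }

    fun cachedCalc(n: Int): Long = cache.getOrPut(n) { expensiveCalc(n) }

    fun main() {
        cachedCalc(42) // slow: computed and stored
        cachedCalc(42) // fast: served from memory
    }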
Not if the alternative to reading/writing from RAM is reading/writing from disk. One piece of data-processing software I used to use had its roots in the early 80s and was very memory efficient. It achieved this by rewriting the input data to disk in a clever way and then reading and writing just the parts it needed. I could process gigs of data using only a couple of hundred MBs of RAM. A different, more modern program I also used would slurp in as much data as it could fit in RAM and then just go from there. It would happily use 30 of the 32 GB of RAM I had every time it ran, but it was also 1-2 orders of magnitude faster.
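The two styles, sketched in Kotlin (the file name is hypothetical): the streaming version keeps memory flat regardless of input size; the slurping version trades RAM for speed on repeated passes.

    import java.io.File

    fun main() {
        val input = File("data.txt") // hypothetical input file

        // Memory-frugal: stream line by line; only a small buffer
        // is resident at any moment, regardless of file size.
        var count = 0L
        input.useLines { lines -> lines.forEach { _ -> count++ } }

        // Memory-hungry: slurp everything into RAM up front.
        // Fast to re-process repeatedly, but memory use is
        // proportional to the file size.
        val all = input.readLines()
        println("$count lines streamed, ${all.size} lines in memory")
    }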
Hi, there's this guy named 'cache' that wants to speak with you.
Because it's an Electron app, i.e. another copy of Chrome.
Actually, for all the money they've raised, I can't understand why the Slack apps are so terrible. I use the Mac and iOS versions, and they still can't show useful activity badges; video doesn't work on iOS... what are they spending their billions on?
As long as the software doesn't require all that memory, but can work OK with less and then scale up to use more if you've got it, I guess I don't see what the problem is.
It actually can. It's called swapping. An amazing technology from the 90s.
That's exactly the point of swapping. Memory which was not used in a (relatively) long time is taken away forcefully from the process if there is memory pressure on the system.
So it doesn't matter if Chrome releases it or not, it will be taken away if it just sits there.
To disk. Waiting for memory that a badly behaved app overzealously ate up without using to finish flushing to disk before it can be reused is worse than simply having the memory readily available. This is why I always disable swap, opting to let bad processes die, sometimes even pulling the trigger myself.
In modern systems, paging is okay. But not swapping.
From the 60s. The Atlas computer had virtual memory. I originally thought it was the System/360.
Yes, modern operating systems use free RAM as cache for the HDD. If you have an I/O intensive process, having a large amount of RAM is really helpful.
For that to work, applications shouldn't be gobbling up all the RAM; instead they should leave it to the OS to cache data.
Sure, NVMe drives help, but loading things from RAM is snappier still.
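One rough way to see the OS read cache at work, as a Kotlin sketch (the path is hypothetical; exact timings vary by OS, disk, and memory pressure):

    import java.io.File
    import kotlin.system.measureTimeMillis

    fun main() {
        val f = File("big_file.bin") // hypothetical few-hundred-MB file

        // First read is likely cold: data comes from disk.
        val cold = measureTimeMillis { f.readBytes() }

        // Second read is likely warm: the OS kept the pages in
        // otherwise-free RAM, so little or no disk I/O is needed.
        val warm = measureTimeMillis { f.readBytes() }

        println("cold: ${cold}ms, warm: ${warm}ms")
    }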
Also, those huge memory footprints can themselves cause issues. E.g., they can increase GC pauses in GC'd languages, or decrease cache hit rates when allocations are scattered all over the place.
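A small Kotlin illustration of the locality point (the element count is arbitrary): a `List<Int>` holds pointers to boxed Integer objects scattered around the heap, while an `IntArray` holds the values contiguously, so the same loop touches far fewer cache lines and creates far less GC work.

    import kotlin.system.measureTimeMillis

    fun main() {
        val n = 10_000_000

        // Boxed: n pointer-sized slots plus n small heap objects.
        val boxed: List<Int> = List(n) { it }

        // Unboxed: one contiguous block of primitive ints.
        val primitive = IntArray(n) { it }

        var s1 = 0L
        val tBoxed = measureTimeMillis { for (x in boxed) s1 += x }

        var s2 = 0L
        val tPrim = measureTimeMillis { for (x in primitive) s2 += x }

        println("boxed: ${tBoxed}ms, primitive: ${tPrim}ms ($s1 == $s2)")
    }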
Cut corners and offload the trade-offs onto your end users.
It's so grossly dumb: giant native applications running a hidden web browser to show a webpage that does its utmost to pretend to be a native application (but it doesn't work; Alt-F launches the File menu in Slack, but cursor left/right doesn't take you to the next menu like REAL native apps do)...
It's a future I do not wish to be involved with!
(Avoiding hardware upgrades isn't all peaches and cream. My graphics cards are getting less acceptable all the time, and I'll probably need to upgrade this year.)
I don't use Chrome or Slack or any of those others. The easiest way to use less memory is to not run programs which require a bunch of it. People who drive Ferraris don't complain about their fuel economy.
Repeat the cycle.
It means that however much you have, your system should use as much of it as possible. Otherwise it's just sitting there doing nothing. Contrary to popular belief, the more RAM you're using, the faster your system will be (assuming it's using the RAM efficiently).
Is this a joke? If I have 16GB of RAM but my programs use 18GB then my system will suddenly become faster?
If you work w/ removable media, IME the cache will free the pages associated w/ that media when it's removed (as the cache must be assumed invalid when/if the media is plugged back in). Playing a DVD, and then ejecting it, would often result in a large drop in disk cache. (Back before Netflix…)
I repeat http://www.ultratechnology.com/lowfat.htm
>I'm not sure exactly what you are saying
> This is visible on all systems I work with (mac, win, linux), either attributed to the process, or to OS read cache.
My response was directed at this part of your comment:
>I would expect any system that manages cache data to fill most of your RAM.
All systems do that, but the cache doesn't look "used" (e.g., on the Windows memory graph, the dark region excludes OS-cached memory). Unless they're using `free`, no reasonable person would think 99% of their RAM is being used.
Hacking up and rebranding an IRCd is a tried and true startup method: Napster, Slack, Discord...
Use more sensible tools and you will not waste as much RAM (Firefox, Visual Studio Code, .NET).
Edit: Numberwang, apparently your account is dead so I can't reply anywhere but here.
Edge is great; I can't complain about its memory usage. Unfortunately, with Microsoft moving to the Chromium engine, I'm taking the precautionary measure of moving to Firefox until they've shown they can make Chrome suck less. Firefox is not as good as Edge for my needs, but it's quite close.
¹but I don't want to downplay this concern entirely; in the parent's case of an int needing boxing, that effectively doubles the cost: the int, and now the pointer to that int. (Plus any hidden requirements of the allocator, but even if we assume that's free, the story might still be plenty grim.)
Empty objects aren't free either, so you pay triple.
Does anyone know anything about the memory usage of Edge?
I use a bunch, like uBlock Origin, uMatrix, Tridactyl, Tree Style Tab, and Stylus, and Firefox slows down to a crawl for me when I have more than about 8 tabs open.
Now, some might advise not to use extensions, but using extensions is half the reason that I use Firefox at all in the first place, as Firefox without extensions really sucks.
Slack leaks memory like there's no tomorrow, but that's on Slack. I just close the Slack tab and force a full GC/CC cycle and I get my memory back.
I'd do a binary search of sorts: disable half of the extensions, see if performance is still bad, and so on, to narrow it down to the offending extension(s).
The only extension I use is uBlock Origin. I've never felt the need for anything else whether it's on Firefox or Chrome.
Geek culture has zero loyalty, otherwise people would run the product that has moral integrity even if it costs them a few milliseconds in load times.
This is objectively false.
It's faster and lighter than Chrome on my machine.
It's more like a freight train than a sports car, but I routinely see Chrome taking up 57 GB of RAM on the node I use as my workstation and kinda just laugh.
In hindsight I should have clarified that this was not intended to be a dig against chrome. I've frequently got hundreds of tabs open, because with the amount of memory I've got, why not?
The single PSU of the chassis that supplies power to both nodes does require 220V power, but I solved this easily enough with a 2000-watt step-up transformer off of Amazon. I've also jerry-rigged an additional ATX power supply, as the native chassis PSU doesn't have 6-pin power connectors to power the GPUs.
I've never calculated how much power the thing pulls under full load, but considering that I've never had any instability, I've got to assume it's less than the 2000 watts the transformer can handle plus the 400-watt ATX PSU for the GPUs. I've got flat-rate electric from my landlord, so it's essentially free to run for me.
There's a small scene for people who do this and share builds, so it doesn't take much research to get started.
Recommendations on sites/forums?
r/homelab is also good, but it can be more rackmount-focused as far as gear recommendations go.
Micron went from being a basket case for years and a relatively cheap acquisition target for the Chinese (state-owned Tsinghua Unigroup tried to buy them in 2015) to generating a $5b profit in 2017 on $20b in sales. In 2018 that skyrocketed to $14b in profit on $30b in sales: epic margins.
Micron's balance sheet improved from a positive $11.8b in net tangible assets, to $33.8b in the latest quarter. They cut their long-term debt from $7.1b to $3.3b over that time.
Samsung has an even larger memory business than Micron. I wouldn't be surprised if they cleared $18-$20b in profit in 2018 on memory alone.
Amazing things happen to pricing and margins when you consolidate an important segment down to only three players. If you slash Micron's 2018 profit by 50%, they'd still only be sporting a six PE ratio presently and would still generate $7b in profit. Memory will be a tremendous business until / unless the Chinese bust up the party (as they hope to in the coming years).
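(For the arithmetic: PE is market cap divided by annual profit, so a PE of 6 on $7b of profit implies a market cap of roughly 6 × $7b ≈ $42b.)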
So you get whipsawed by price swings, plus when prices are high, someone with cheap access to capital (either through government support or simply a period of secular low rates, as today) can build a factory and demolish whatever pricing you're getting on your chips.
This is precisely what made Micron a basket case.
If you look at the history of the DRAM business, it's been a continuous, expensive string of collapses and recoveries. Where are the Japanese giants who used to dominate the business? Where are the US titans that used to exist? (Micron itself was a late entrant, thanks to Simplot's capital.) Etc. Think of the steel business; DRAM is no different except its cycles are faster.
25 years from now we'll all be running cheap East African DRAM from factories funded with oil money and built in regions of low cost land and lax safety & environmental standards.
Hopefully this time they'll just agree to stop making lossy RAM (Rowhammer), rather than having another fab fire.
If it's not enough to offset the cost of developing it, I'm not sure why RAM manufacturers would make it.
Haswell is the oldest CPU that supports AVX2 and DDR4, comes quad-core standard, and probably has USB 3.0 ports and PCIe 3.0.
"Modern" machines like Skylake have increased to 6-core standard (even for an i5) or more (i7 and AMD Zen are 8-cores now). Those extra cores help with compiles / 3d blender renders / some other tasks, but its not a major improvement otherwise. USB 3.1 Type C isn't a big change from USB 3.0, and PCIe 3.0 and DDR4 are still mainstream.
In as little as a year, however, things seem to be changing. Intel Ice Lake will be adding AVX-512 to the mainstream later this year. We'll see whether consumers get an uptick in AVX-512 usage, or whether Adobe and other programs continue to push GPGPU instead.
With DDR5 being released in 2019 (doubling bandwidth compared to DDR4), PCIe 4.0 beginning adoption, and the PCIe 5.0 and USB4 standards being released, it seems like a major upgrade cycle will occur in the 2020-2021 timeframe. (It seems like the desktop market is skipping PCIe 4.0 and going straight to PCIe 5.0, quadrupling I/O bandwidth in one jump.)
Laptops are still waiting for LPDDR4 support, which kinda sucks for them. Intel really feels like they stalled out with all of these 10nm issues they've been having. A lot of their technology (e.g., Ice Lake) was delayed due to the 10nm issues.
(Acknowledging that this website is for a particular subset of computer users.) I think this comment misses the point of the original comment. I use a desktop I built in 2011 (i5 2500k) as my main driver. I upgraded to an SSD and an RX580 recently, and it performs fantastically. It has USB 3, which is important to me for bulk file transfers to removable media. It has handled every game I've thrown at it. I may upgrade the CPU/RAM/mobo at some point, but haven't felt the need yet.
I'm sure my computer would be pretty terrible compared to a new i7 at the particular CPU-intensive tasks you mentioned (encoding, rendering, compiling...), but for everyday use and gaming it can't be faulted in the slightest. Very impressive for an 8-year-old CPU. Even 3 years ago, an 8-year-old computer would've been trash (Core 2 Duo, ew).
If you have no desire to use it as a gaming rig but simply want a beast for compiling your code/video editing then it should last even longer.
I would be building a PC right now, but the one I built two years ago is fine.
Sure, but you can get CPUs with 2-4x the number of cores at consumer price points, which is great for people who do streaming or virtualization.
> hot new NVME SSDs aren't noticeably faster than SATA SSDs
Oh yes they are, and I'd never go back
They're also great for people like me who use distros like Gentoo where we're regularly recompiling all the software we use as new versions become available.
My entire recompile cycle now takes literally weeks of non-stop compilation, on my old, slow 2-core laptop. Compiling qtwebkit alone takes several days. If I had 8 or 16 cores on a modern processor, that would be a massive improvement.
<edit> I’m surprised to learn how differing others’ experiences are, especially on the price of the overall system. When I bought my most recent desktop, I typed in the make and model of every part into PCPartPicker and learned that I would be saving over $100 by going pre-built.
There were some fair points made down thread about the PSU and mobo, which I hadn’t really considered. My RMA experiences with a few hard drives versus the exchange process at my local Micro Center (they swapped the whole thing out even though I was well past the return period) were night and day, and many of you must have had the opposite experience to take the stance that you did.
Regardless, I find it interesting how polarizing this issue is and I probably shouldn’t have written my original comment in such a provocative way (although it didn’t seem that way at the time). </edit>
If you know how to troubleshoot effectively, many issues are not that hard to isolate. There are some that are very time consuming to isolate but those issues tend to be pretty rare (troubleshooting memory etc.).
The nice thing about building a computer is being able to configure it exactly the way you want. In some cases retailers do screwy things and/or send customers the wrong parts, although there are some top-flight retailers who don't do that and charge a premium for it.
It's not necessarily more price-competitive for some things, but there are certain key components that retail PC companies tend to cheap out on unnecessarily, like the PSU and the motherboard. Cheaping out on the PSU can give the retailer a lot of extra margin, but a good PSU is so cheap that it does not make that much of a difference in the final price to the user.
If you learn a bit more about things like case airflow, or are planning to overclock, a lot of the things that retail PC sellers do will drive you nuts. So once you're at a certain knowledge level, it's just better to build your own, despite all the many petty aggravations and frustrations that can occur along the way. The annoyances tend to be temporary, whereas the benefits last for years.
Maybe for general-purpose computers doing nothing special, but it's generally more cost-effective to select and build your own PC for things like gaming.
Beyond that, computers are pretty robust. It's pretty hard to select parts that are incompatible, assuming you get a CPU that fits the socket on the motherboard you buy and you don't go cheap on your PSU. Lots of tools (and communities) exist to make sure you're selecting parts properly (not overdoing it on the CPU when you're getting a low-tier GPU, for instance). I'm not sure what your OS comment implies, because virtually everything will work on the most widely adopted OSes (and if you're running a pretty obscure Linux distro, I'd assume you're familiar with driver woes).
I'm more than happy to be corrected, but if you're buying prebuilt you're likely paying for the parts and the labour involved. Unless the company is getting steep discounts on volume purchases (which they are only getting for bottom tier parts) you're not saving money by having them put it together for you.
I build my own systems because I like the constant incremental-upgrade model. But when I recently had to help someone purchase a system, I found it was hard to beat SI (system integrator) prices with equally specced hardware.
I built my current computer and don't need a screwdriver to open the case or to replace the SSD, HDDs, GPU, or RAM. They all just pop out now! First computer I've upgraded, too.
But Corsair, Falcon NW, and CyberPower are others.
For a general purpose / MS Office / browsing desktop, going pre-built may be viable.
For any more specific / demanding usage, building a computer yourself will allow you to get the performance you need for the price you want.
I'm a decade or more past the point of enjoying building a new PC, but a pre-built machine never seemed to make financial or use-case sense for me. YMMV :)
The industry, the speculators, and the guidance from "smart investor people" were massively overoptimistic about the AI and mining trends. All those billions pumped into those niches failed to translate into datacentre spending. Server sales very likely went down in 2018.
And the same goes for mobile: "AI" phones turned out to be a bad sell.
You can call the Win32 API directly from PowerShell if you wish (e.g. SetProcessWorkingSetSizeEx with QUOTA_LIMITS_HARDWS_MAX_ENABLE). Windows System Resource Manager is only available on Windows Server.
All you're doing is limiting how much physical memory the process can consume before it pages. In either case the process will continue to consume all the memory it desires; it just varies where the additional consumption is stored.
If you know more than someone else, share some of your knowledge so we can all learn. Or, if you don't have time or don't want to, simply don't post.