With super fast SSDs, does anyone really need 32GB of RAM?
I don't even shut down my multiple JetBrains IDEs and gazillion browser tabs or bloated Slack when I take a break and pin the CPU with Ableton Live and a bunch of soft synths on my MBP. Nothing skips a beat.
Is anyone seriously running into issues with only 16GB of RAM?
Oh yeah! When I showed up at my current employer, I had a laptop with 32GB and two PCIe SSDs in RAID 0. Almost immediately, I had to upgrade to 64GB.
I'm a data scientist and regularly work with multiple datasets simultaneously, which requires the RAM. Both Python and R rely on in-memory processing; loading on and off disk is substantially slower and doesn't fit what I'm trying to do. For really large datasets I also have a 28-core Xeon with 196GB that I can remote into, but it's nice not to have constraints on my laptop.
Of course, you could go with Hadoop or Spark to process some of these datasets, but that requires quite a bit of overhead, and it's easier (and cheaper) to just buy more RAM.
Same story for me, give or take a few percent. I have to recommend dask for Python, though; it made out-of-memory errors largely disappear for me. It allows parallel processing of disk-sized datasets with (almost) the convenience of in-memory datasets.
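For anyone curious, a minimal sketch of what that looks like (the file pattern and column names here are made up for illustration):

    # dask.dataframe lazily partitions data that may be much larger than
    # RAM; work happens chunk by chunk when .compute() is called.
    import dask.dataframe as dd

    df = dd.read_csv("events-*.csv")           # hypothetical files
    daily = df.groupby("day")["amount"].sum()  # pandas-like API, lazy
    print(daily.compute().head())

The nice part is that the groupby never needs the whole dataset resident in memory at once.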
Really? It's been a while since I've used it, and I remember a good portion of the documentation talking about how they replicate some, but not all, of Pandas' APIs (because of the sheer number of them).
I'm a C++ programmer working in games, and I run out of 64GB of RAM in my workstation daily. I can't wait until we finally get upgraded to 128 or 256GB of RAM as standard.
Well, that's why we consumers have to buy better hardware and more RAM, right? As an old former Amiga programmer, I have to this day been a less-is-more kind of guy: make code run faster and make the program use less RAM.
Good in theory unless you need all of the data at once. There are things we do now that wouldn't have been possible (in the same sense) 25 years ago without a lot of work. We might use languages that are 200x slower, but they might be 10x more productive. That's a winning tradeoff for many people.
Nope, it has nothing to do with what you as a customer get as a final product. Loading the main map of the game uses about 30GB of RAM in the editor, and starting the main servers in a custom configuration uses that amount again. Systems like FASTBuild can use several gigabytes when compiling. None of this has anything to do with the client, which will run with as little as 4GB of RAM.
Once your datasets grow beyond the bounds of a single reasonable machine, it's time to switch to an Apache Spark cluster (or similar).
You can still write your data analysis code in Python, but you get to leverage multiple machines and an intelligent compute engine that knows how to distribute your computation across nodes automatically, keeping data lineage information so that computation is moved closest to where the data is located.
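In practice that can look something like this (a hedged sketch; the path and column names are placeholders, not anything from a real job):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example").getOrCreate()

    # Spark plans this, tracks lineage, and ships tasks to whichever
    # nodes hold each partition of the data.
    df = spark.read.parquet("hdfs:///data/events")  # hypothetical dataset
    df.groupBy("day").agg(F.sum("amount").alias("total")).show()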
You know, sometimes you are in that uncomfortable spot where you have too much data for a single laptop but too little to justify running a whole computing cluster.
That is the kind of spot where you max out everything you can max out and just go take a break when something intensive is running.
This - honestly, depending on the task, hundreds of GB can still be "single computer" territory, because a cluster just isn't worth the time, money, and administration overhead. Parallel + out-of-core computation doesn't necessarily imply a cluster, though: single-node Spark or something like dask works fine if you're in the Python world.
Setting up an ad hoc (aka standalone) Spark cluster with a bunch of machines you control is a ridiculously trivial task, though. You run sbin/start-master.sh on the machine you designate as master, point every other machine at it with sbin/start-slave.sh spark://master:7077, and then just submit jobs to the master. That's all.
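Something like this, roughly (the master hostname is a placeholder; the shell steps are the standard standalone scripts that ship with Spark):

    # On the designated master:  sbin/start-master.sh
    # On every other machine:    sbin/start-slave.sh spark://headnode:7077
    # Then any client can target the cluster:
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("spark://headnode:7077")  # placeholder master URL
        .appName("adhoc-cluster-check")
        .getOrCreate()
    )

    # Sanity check: the sum is computed across all the worker machines.
    print(spark.sparkContext.parallelize(range(10_000_000)).sum())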
Running distributed like that always has a cost, both in inefficiency of the compute and in person-time.
If you can still run on one machine, it's almost always a win. 32GB is a perfectly reasonable amount of memory to expect, and 64GB isn't outlandish at all for a workstation.
Cloud is an option for really large memory requirements. You can provision machines with nearly 2TB of RAM in AWS, and it's pretty cost-effective if you only spin them up when you actually need them.
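If you script it, spinning one up for an afternoon is a few lines (a sketch, not a tested recipe; the AMI ID and key pair are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="x1.32xlarge",       # ~2TB of RAM
        KeyName="my-key",                 # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])

Just remember to terminate it when you're done; that's where the cost-effectiveness comes from.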
My minikube dev environment (microservices with lots of independent databases) can be crammed into about 24GB of RAM and mimics our production environment almost 1:1; there are a number of different databases (Couchbase, Elasticsearch, Redis, RabbitMQ, etc.). If a developer is limited to 16GB, they have to run a more crammed and far less similar dev environment: instead of using one database per service like we do in production, we have to cram multiple services into one database (say, Couchbase with a bucket for each service). I can chunk this down to 13-14GB, but if the user has 16GB max, that leaves them with 2-3GB of RAM for their IDE, Chrome, Spotify, etc.
It has severely limited our dev environment's room to maneuver, and we're constantly fighting to stay within that 16GB spec. Should we deviate heavily from our staging/prod environments?
What's the sweet spot? A bunch of us have built Hackintosh desktops at this point so we can have 32-64GB+ and more cores, and we're not constantly fighting resource contention with all the Docker containers we need to run.
It is really irritating that Apple doesn't offer 256GB iMac Pros, given current 64GB LR-DIMM prices (and that capacity being officially supported by Intel); even the current XNU kernel can handle 252GB.
While not a true solution, because you need to break open your computer, the newest iMac Pro has socketed DDR4. iFixit's teardown[0] suggests the max it'll support is 4x32GB, for a total of 128GB.
Fast SSDs are still orders of magnitude slower than RAM - SSD latency is on the order of tens of microseconds, while RAM access latency is on the order of tens of nanoseconds.
You're in luck because your working set happens to fit in RAM, and the rest swaps out gracefully without pulling itself back into memory. But as soon as you're actually working with more than 16GB of data at once, you're in trouble.
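You can see the cliff for yourself with a crude experiment (a rough sketch; "scratch.bin" is a throwaway file, and on a warm run the OS page cache will mask the difference unless the file is bigger than RAM):

    import numpy as np, time

    n = 200_000_000  # ~1.6GB of float64
    in_ram = np.random.rand(n)
    in_ram.tofile("scratch.bin")
    on_disk = np.memmap("scratch.bin", dtype=np.float64, mode="r")

    t0 = time.perf_counter(); in_ram.sum(); t1 = time.perf_counter()
    on_disk.sum(); t2 = time.perf_counter()
    print(f"RAM: {t1 - t0:.2f}s, SSD-backed: {t2 - t1:.2f}s")

And that's the friendly sequential case; random access is where the latency gap really bites.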
Yes. For devs like my team who use VMs for compiling and testing, 16GB is not enough, so we are moving off MacBooks. You can argue that you shouldn't do that on a laptop and should instead have a desktop you always connect to, but that's not how we prefer to work. If no laptop had more than, say, 8GB, we'd probably have gone that way. 16GB is almost enough.
Because people wanted to take them around. At first the dev system fit in a 16GB MacBook Pro without any trouble, and it was a startup, so it was natural to say that's all you need. Then the dev environment started getting a little bigger.
I showed up at the company and noticed you couldn't run the VM for a test and compile separately at the same time. Next I found out I couldn't have a bunch of tabs open while doing dev. That was painful, so I got a desktop. The existing people, who were used to how things worked, said you shouldn't have a bunch of tabs open; don't do that and it works fine (oh, and don't run any tests while compiling).
Then, as memory use kept climbing, even the at-most-four-tabs people found they kept running out of memory, so a few 32GB laptops were bought and suddenly things worked again.
A few of us have desktops, most people are struggling with 16GB notebooks, and the company has started buying 32GB notebooks for people who want them.
"Wanted to take them around" - but did they "need" to carry them around?
Were they using them for their own use outside of working hours? That's not usually a good idea; you want to keep your personal device use separate from work equipment.
We all have laptops because:
- most devs are in the on-call roster for their team’s systems.
- it’s really handy to be able to bring a machine to a meeting, or to go and sit with people from the team that make the API you’re trying to use.
Some of us work for companies that don't require us to be in the office when we work, so we end up working at coffeeshops, at the beach, home, etc, despite having an office.
Sure we do. Try running multiple VMs or a few Selenium tests and you can watch the GBs get eaten by the dozen. Not to mention creative software like Adobe's product line, which is very demanding resource-wise: load a couple hundred high-res pics in Photoshop, try executing a complicated script, and boom, your PC gets hammered. It also matters that most laptops can't fit an advanced GPU, so a lot of processing falls on the CPU and the RAM.
Of course, we can argue how many of these tasks will be run from a laptop, but some people use a laptop as their main rig, so I guess every possible scenario is on the table.
Our photogrammetry generation machine has 64GB of RAM, and this is much, MUCH lower than we want. Most of our models crash when we try to generate them on 16GB and 32GB machines; the workflow for our target quality/size models has 64GB as an absolute floor.
We'd like a few TB of VRAM as well, but that's another order of magnitude in expense...
Might as well burn more points here - these are all niche cases that should be offloaded onto external machines.
I should have rephrased my original question - is any significant share of the market running into issues? Because everyone acts like more than 16GB is something a huge chunk of people currently need.
You're asking on HN. Of course we're all going to rush to tell you about our niche cases. In terms of the broader market, you're already a niche case if you're a software developer of any sort.
In my experience it's pretty easy to run up against the 16 GB limit on the MBP if you're running Slack, a browser, a couple of IDEs, and Docker.
Makes sense though. Slack and other Electron apps are basically running their own isolated browser instance, so they duplicate all of the baseline memory needs of a browser plus the memory of their actual content.
Eh. To give a comparison, Kate (the KDE default text editor, and a decent-looking one at that) uses <1M when freshly opened and <100M when a 2 kLOC (~84 KiB) C file is opened (with a decent number of plugins).
Meanwhile, Visual Studio Code uses ~400M when freshly opened and ~550M with the same file (with similar plugins where available). Admittedly, VSC offers far more functionality, but the memory increase is still sizable.
I know that those (Slack and VSC) are vastly different programs with vastly different purposes, but even a minimal Electron app is going to have ~100M baseline memory, which is going to be used again with each and every Electron app that gets launched, in addition to the runtime overhead.
A common response to this is that "RAM is there to be used", but that RAM would have been used anyway for caching (which would have increased overall IO performance across the system) if these apps didn't hog it all. That becomes especially relevant when doing tasks that push through lots of data (machine learning, compilation, etc.).
That being said, I acknowledge that browser-runtime-based apps make it much easier to develop cross-platform applications, a fact for which I am grateful as I run Linux. I think a reasonable solution going forward would be for Electron (or another similar runtime) to offer a way for multiple installed apps to share one running instance. Ideally, of course, this would be offered natively by the browsers themselves, but given the technical hurdles to doing that _safely_, I'd easily settle for the former.
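If you want to check the per-app baseline on your own machine, something like this works (a sketch using psutil, which needs a pip install; the process names are examples and vary by platform):

    import psutil

    def rss_mb(pattern):
        # Sum resident memory across all matching processes, since
        # Electron apps and browsers spawn many helper processes.
        total = 0
        for p in psutil.process_iter(["name", "memory_info"]):
            name = (p.info["name"] or "").lower()
            mem = p.info["memory_info"]
            if pattern in name and mem is not None:
                total += mem.rss
        return total / (1024 * 1024)

    for app in ("slack", "code", "kate"):
        print(f"{app}: {rss_mb(app):.0f} MB")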
• Software Development
• 4k+ Video Editing
• High Resolution Image Editing
• 3D CAD
• GIS
• AR/VR
• Data Science
• Machine Learning
• the list goes on…
Probably a niche case, but I regularly analyze data that runs into tens of GB, so it helps when that fits in RAM and requires less chunking during analysis.
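When it doesn't fit, the fallback looks roughly like this (the filename and column are placeholders), which is exactly the bookkeeping more RAM lets you skip:

    import pandas as pd

    # Stream the file in ~1M-row chunks and aggregate incrementally
    # instead of holding everything in memory at once.
    total = 0.0
    for chunk in pd.read_csv("measurements.csv", chunksize=1_000_000):
        total += chunk["value"].sum()
    print(total)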
Here is a good visualization of access speeds for each level of cache, then main memory. Now take the main-memory access time and scale it up by a couple of orders of magnitude (for today's state-of-the-art SSDs) and that's the access speed.
> With super fast SSDs, does anyone really need 32GB of RAM?
I've used a machine with 8GB of RAM whose swap partition and drive cache were on the fastest SSD around (an Intel Optane SSD DC P4800X). Responsiveness still takes a huge hit when processes are actively using more data than fits in RAM.
Fast SSDs can help when you have more RAM than you need but not as much as you'd like, but they don't help when you don't have as much RAM as you need.
"Is anyone seriously running into issues with only 16GB of RAM?"
Yes, all the time. I wouldn't touch even a laptop with less than 32GB these days, but YMMV with workload. JetBrains IDEs and Slack are a far cry from volumetric image processing or a lot of data science loads.
Agreed. My new build (15 months old now) has 32GB ECC. Apps and VMs consume RAM like nobody's business. I'm glad that I built it before memory prices doubled.
That's a silly question. Most people barely use SATA 3 SSDs, and SSDs tend to degrade quite a bit after a year. But even the fastest ones still aren't quite a match for RAM.