Hacker News

This is fantastic news. Hopefully the cost to manufacturers is only marginal and they find a suitable replacement for their current "each tier in RAM comes with a 5-20% price bump" pricing scheme.

Too bad Apple is almost guaranteed not to adopt the standard. I miss being able to upgrade the RAM in MacBooks.




> Too bad apple is almost guaranteed to not adopt the standard.

Apple would require multiple LPCAMM2 modules to provide the bus width necessary for their chips. Up to 4 x LPCAMM2 modules depending on the processor.

Each LPCAMM2 module is almost as large as an Apple CPU and its unified RAM chips combined, so putting 2-4 LPCAMM2 modules on the board is completely infeasible without significantly increasing the size of the laptop.

Remember, the Apple architecture is a combined CPU/GPU architecture and has memory bandwidth to match. It's closer to the GPU than the CPU in your non-Mac machine. Asking for upgradeable RAM on Apple laptops is akin to asking for upgradeable RAM on your GPU (which would be neither cheap nor easy).

For every 1 person who thinks they'd want a bigger MacBook Pro if it enabled memory upgrades, there are many, many more people who would gladly take the smaller size of the integrated solution we have today.


> like asking for upgradeable RAM on your GPU

Can I please have upgradeable RAM on GPU? Pwetty pwease?


Sure, as long as you're willing to pay in cost, size, and performance.


> Up to 4 x LPCAMM2 modules depending on the processor.

The non-Pro/Max versions (e.g. M3) use a 128-bit bus, and these are arguably the notebooks that most need to be upgraded later, since they commonly come with only 8 GB of RAM.

Even the Pro versions (e.g. M3 Pro) use up to 256 bits; that would be 2 x LPCAMM2 modules, which seems plausible.

For the M3 Max in the MacBook Pro, yes, 4 x LPCAMM2 would probably be impossible. But something like the Mac Studio could have them, and that is arguably also the kind of device whose memory you'd want to increase in the future.
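The module-count arithmetic above can be sketched out. A rough sketch, assuming the bus widths discussed in this thread and the 128-bit-per-module interface from Micron's LPCAMM2 materials:

```python
import math

LPCAMM2_BITS = 128  # interface width of one LPCAMM2 module (per Micron)

# Memory bus widths as discussed in this thread (bits)
chips = {"M3": 128, "M3 Pro": 256, "M3 Max": 512}

for name, width in chips.items():
    # Each module covers 128 bits of the bus, so round up
    modules = math.ceil(width / LPCAMM2_BITS)
    print(f"{name}: {width}-bit bus -> {modules} x LPCAMM2")
```

Which works out to 1 module for the base M3, 2 for the Pro, and 4 for the Max.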


It would only need to be 2x per board side.


Apple ships 128 bit, 256 bit, and 512 bit wide memory interfaces on laptops (up to 1024 bit wide on desktops).

Is it feasible to fit memory bandwidth like the M3 Max (512 bits wide LPDDR5-6400) with LPCAMM2 in a thin/light laptop?


This PDF[1] suggests that an LPCAMM2 module has a 128-bit-wide memory interface, so the epic memory bandwidth of the M3 Max won’t be achievable with one of these memory modules. High-end devices could potentially have two or more of them arranged around the CPU, though?

[1] https://investors.micron.com/node/47186/pdf
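As a back-of-the-envelope check, assuming the M3 Max's LPDDR5-6400 across a 512-bit bus (per the comment upthread) and a single 128-bit LPCAMM2 module at LPDDR5X-7500 (a speed grade Micron has announced; these figures are illustrative):

```python
def peak_gbps(transfer_mts: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s: transfers per second times bytes per transfer."""
    return transfer_mts * (bus_bits / 8) / 1000

m3_max = peak_gbps(6400, 512)      # 409.6 GB/s
one_module = peak_gbps(7500, 128)  # 120.0 GB/s
print(m3_max, one_module)
```

One module reaches well under a third of the M3 Max figure, so matching it would indeed take a multi-module layout.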


Apple could just make lower-tier MacBooks, but then Mac fanboys wouldn't be able to ask “but what about Apple's quarterly profits?”

Most MacBooks don't need high memory bandwidth; most users are using their Macs for word processing, Excel, and VS Code.


As a non-Mac reference, I work on an HP laptop from 2014. It was a high-end laptop back then. It goes for between 300 and 600 euros refurbished now.

I expanded it to 32 GB of RAM and a 3 TB SSD, but it's still an i7 4xxx with 1666 MHz RAM. And yet it's OK for Ruby, Python, Node, PostgreSQL, Docker. I don't feel the need to upgrade; I will when I get a major failure and no spare parts to fix it.

So yes, low end Macs are probably good for nearly everything.


Even low-end gaming, simulations, and fun WebGL toys can require a fair amount of memory bandwidth with an iGPU like Apple's M series. It also helps quite a bit for inference. An MBP with an M3 Max can run models requiring multiple GPUs on a desktop and still get decent perf for single users.


> An MBP with an M3 Max can run models requiring multiple GPUs on a desktop and still get decent perf for single users.

Good for your niche case; the other 99.8% still only do web and low-performance desktop applications (which includes IDEs).


Yes, but Apple’s trying to build an ecosystem where users get high-quality, offline, low-latency AI computed on their device. Today there’s not much of that. And I don’t think they even really know what’s going to justify all of that silicon in the neural engine and the memory bandwidth.

Imagine 5 years from now people have built whole stacks on that foundation. And then competing laptops need to ship that compute to the cloud, with all of the unsolvable problems that come with that. Privacy, service costs (ads?), latency, reliability.


Apple is also deliberately avoiding having “celeron” type products in their lineup because those ultimately mar the brand’s image due to being kinda crap, even if they’re technically adequate for the tasks they’re used for.

They instead position midrange products from 1-2 gens ago as their entry level which isn’t quite as cheap but is usually also much more pleasant to use than the usual bargain basement stuff.


For 512 bits you would need four LPCAMM2s. I could imagine putting two on opposite sides of the SoC but four might require a huge motherboard.


Perhaps future LPCAMM generations will offer wider interfaces? I still can't imagine Apple using them unless required by right-to-repair laws. But those laws probably don't extend to making RAM upgradeable.


Apple does this because their CPU and GPU use the same memory, and it's generally the GPU that benefits from more memory bandwidth. Whereas in a PC optimized for GPU work you'd have a discrete GPU that has its own memory which is even faster than that.


Hoping we see AMD Strix Halo with its 256-bit interface crammed into an aggressively cooled, fairly-thin, fairly-light chassis. But it's going to require heavy cooling to make full use of it.

Heck, make it only run full tilt when on an active cooling dock. Let it run at half power when unassisted.


Kinda hilarious to see gamers buying laptops that can't actually leave the house in any practical, meaningful way. I feel like some of them would be better off with SFF PCs and the external monitors they already use. I guess the biggest appeal I've seen is the ability to fold up the gaming laptop and put the dock away to get it off the desk. But an SFF on the ground, plus the wireless gaming keyboard and mouse they already use with the laptop, plus one of those compact "portable" monitors, seems like it'd solve the same problem.


My wife can get an hour of gaming out of her gaming laptop. They're good for being able to game in an area of the house where the rest of the family is, even if that means being plugged in at the dining table. Our home office isn't close enough.

Also a gaming laptop is handy if you want to travel and game at your hotel.


I’ve been wondering for a while now why ASUS or some other gaming laptop manufacturer doesn’t take one of their flagship gaming laptop motherboards, put some beefy but quiet cooling on it, put it in a pizza-box/console enclosure, and sell it as a silent compact gaming desktop.

A machine like that could be relatively small but still dramatically better cooled than even the thickest laptop, due to not having to make space for a battery, keyboard, etc.


ZOTAC does these - there are ZBOX Magnus models with laptop-grade RTX 4000 series GPUs in 2-3 liter chassis. However, their performance and acoustics are rather compromised compared to a proper SFF desktop (which can be built in ~3x the volume).


Yeah, those look like they’re too small to be reasonably cooled. What I had in mind is shaped like the main body of a laptop but maybe 2-3x as thick (to be able to fit plenty of heatsink and proper 120/140mm fans), stood up on its side.


Unified memory is basically L3 cache speed with zero copy between CPU and GPU.

There are engineering differences. Depending on who you ask, it may or may not be worth it.


Assuming you mean latency, Apple's unified memory isn't lower latency than other soldered or socketed solutions, e.g. the M1 Max at 111 ns on a cache miss vs. the 13900K at 93 ns. Certainly not L3-level latency. Zero copy between CPU/GPU is great, but it's not unique to unified memory or soldered RAM.

As far as bandwidth goes, you would only need one or two LPCAMM2 modules to match or exceed the bandwidth of non-Max M-series chips. Accommodating Max chips in a MacBook with LPCAMM2 would definitely be a difficult packaging problem.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...

https://www.anandtech.com/show/17047/the-intel-12th-gen-core...
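A quick sanity check on the one-or-two-modules claim, using Apple's advertised non-Max bandwidths and assuming a module running LPDDR5X-7500 (an announced Micron speed grade; these numbers are illustrative, not a spec):

```python
import math

# One 128-bit LPCAMM2 module at 7500 MT/s: 7500 * 16 bytes / 1000 = 120 GB/s
MODULE_GBPS = 7500 * 128 / 8 / 1000

# Apple's advertised peak bandwidth for non-Max M3 laptop chips (GB/s)
non_max = {"M3": 100, "M3 Pro": 150}

for name, bw in non_max.items():
    print(f"{name}: {bw} GB/s -> {math.ceil(bw / MODULE_GBPS)} module(s)")
```

Under these assumptions the base M3 is covered by a single module and the M3 Pro by two, consistent with the comment above.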


And they won't, so long as people buy regardless.


Given enough pressure ...


You mean pressure from regulators, surely. Because 99% of consumers will not notice or know the difference in a spec sheet.


They will maliciously comply. They might even have 4 sockets for the 512-bit wide systems. But then they’ll keep the SSD devices soldered - just like they’ve done for a long time. Or cover them with epoxy, or rig it with explosives. That’ll show you for trying to upgrade! How dare you ruin the beautiful fat profit margin that our MBAs worked so hard to design in?!?


Apple lines the perimeter of the NAND chips on modern Mac minis with an array of tiny capacitors, so even the crazy people with heater boards can’t desolder the NAND and replace it with higher-density NAND.


Have you not looked at the NAND packages on any regular SSDs? Tiny decoupling caps alongside the NAND are pretty standard practice.


This is normal. They are called decoupling capacitors, and they are there to provide energy when the SSD needs short bursts of it. If you put them any further away, the bit of wire between them and the gate turns into an inductor and has some undesirable characteristics.

Also replacing them is not rocket science. I reckon I could do one fine (used to do rework). The software side is the bugbear.


This is hyperbole. They are replaceable. It's just more difficult.



