
Try using a 128GB M3 Max to run LLaMA2 70B inference and enjoy the sound of airplanes taking off while your battery dies in an hour...


But at least it runs fast enough, and it's portable too.

Running the same LLM on other laptops is basically impossible without an external graphics card, or you have to give up the laptop for a desktop. And that still sounds like an airplane taking off.
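A rough sanity check on why 70B models don't fit on typical laptop GPUs. The byte widths below are approximations, not exact figures for any specific model file (real quantized formats carry metadata and mixed-precision layers):

```python
# Back-of-the-envelope weight-memory estimate for a 70B-parameter model
# at a few common precisions (approximate bits/weight, illustrative only).
PARAMS = 70e9

bytes_per_param = {
    "fp16": 2.0,   # full half-precision
    "q8":   1.0,   # ~8-bit quantization
    "q4":   0.5,   # ~4-bit quantization
}

for fmt, width in bytes_per_param.items():
    gb = PARAMS * width / 1024**3
    print(f"{fmt}: ~{gb:.0f} GB of weights")
```

Even at 4-bit, that is roughly 33 GB of weights alone, well past the 8–16 GB of VRAM on typical laptop GPUs, which is why a 128GB unified-memory machine can hold the whole thing.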


But that's an argumentum ad absurdum considering the price. Yes, it can do that, but you can also do it much better with a cheaper combo that gets you more actual hardware...

Which is exactly what the OP is saying. And I agree, laptops are generally poor value no matter the brand. But you can mitigate that by buying a good-enough, not-too-expensive notebook and pairing it with a "real" desktop computer.

At current Apple prices, the laptop convenience factor over an almost equivalent workstation is not really worth it for the vast majority of people (at least those who have to pay with their own money...).


Fair point, my Zephyrus G14 (4090, 64+16GB) has the same issues: it can run smaller LLaMA2 models much faster than the 128GB M3 Max (3x), but the 70B one is much slower because the CPU has to handle half of the layers (10x). So in the end I ended up with both.
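The "CPU doing half of the layers" split is roughly what the memory math predicts for a 16GB GPU. A hypothetical sketch, assuming the published LLaMA 2 70B layer count (80), ~4-bit quantization, and a couple of GB reserved for KV cache and overhead (all numbers illustrative):

```python
# Illustrative estimate of how many transformer layers of a 4-bit-quantized
# 70B model fit in 16 GB of VRAM. Layer count (80) matches LLaMA 2 70B;
# the per-param width and reserved overhead are assumptions.
TOTAL_PARAMS = 70e9
N_LAYERS = 80
BYTES_PER_PARAM = 0.5          # ~4-bit quantization
VRAM_GB = 16
RESERVED_GB = 2                # KV cache, activations, runtime overhead

layer_gb = (TOTAL_PARAMS / N_LAYERS) * BYTES_PER_PARAM / 1024**3
layers_on_gpu = int((VRAM_GB - RESERVED_GB) / layer_gb)
print(f"~{layer_gb:.2f} GB per layer, ~{layers_on_gpu} of {N_LAYERS} layers on GPU")
```

That comes out to roughly 34 of 80 layers resident on the GPU, so a bit under half the model runs accelerated and the rest falls back to the CPU, consistent with the 10x slowdown described above.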


Buying any laptop just for running LLMs may be a bad choice. I've heard maxed-out spot EC2 instances can be a good option.



