I'm curious what everyone is using hardware-wise for AI experimentation and tinkering. I'm due to replace my desktop hardware, and while I typically go with Apple hardware, I'm wondering whether a Mac Mini M2 Pro or a Mac Studio with M2 Max would still pale in comparison to a Windows-based box with an NVIDIA GPU.
> I'm wondering whether a Mac Mini M2 Pro or a Mac Studio with M2 Max would still pale in comparison to a Windows-based box with an NVIDIA GPU.
Mostly it will. The one advantage an M2 Max has is inference on really large LLMs with llama.cpp, since you have to compromise quality severely to squeeze a 70B model into 24 GB of VRAM... but for anything else, especially fine-tuning, you want desktop GPUs with as much VRAM as you can get.
I just built a 10-liter SFF desktop in a Node 202 with an RTX 3090 for ~2k, and couldn't be more pleased.
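To put numbers behind the 70B-in-24GB point above, here's a quick back-of-envelope sketch. The bits-per-weight figures are rough approximations of common llama.cpp quantization formats (actual sizes vary by model and quant version), and this counts weights only; the KV cache and runtime overhead add several more GB on top.

```python
# Approximate VRAM needed just for the weights of a 70B-parameter model
# at various quantization levels (1 GB = 1e9 bytes).
PARAMS = 70e9

def weight_gb(bits_per_weight: float) -> float:
    """Weight memory in GB for a given average bits-per-weight."""
    return PARAMS * bits_per_weight / 8 / 1e9

# Bits-per-weight values below are approximate, not exact spec figures.
for name, bits in [("fp16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85), ("Q2_K", 2.63)]:
    print(f"{name:7s} ~{weight_gb(bits):6.1f} GB")
```

Under these assumptions, even a ~4.85-bit quant of a 70B model needs roughly 42 GB for weights alone, so a 24 GB card forces you down to ~2-bit territory, which is exactly the quality compromise mentioned above.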
While Macs, especially the Mac Studio with M2 Max, offer impressive performance, NVIDIA GPUs on Windows-based machines are still the go-to for many AI tasks due to CUDA support and compatibility with popular AI frameworks.
That said, Apple Silicon chips are making strides in this field. It ultimately depends on your specific needs and preferences: consider your software ecosystem, your budget, and whether you need portability.
Both options have their strengths, so research thoroughly to make the best choice for your AI projects.
What kind of experimentation are you doing? Also, cloud GPUs are often quite cheap, so if you're going to have low utilization rates, that's an option to consider.