I was skeptical of the "AI" branding, but it has 64GB of RAM and the Ryzen AI 9 HX 370 CPU, which supposedly has good LLM performance [1]. So yeah, it's an interesting device.
That sounds like 'running on CPU is considered competitive with running on GPU', and my BS alarms are going off...
...buuuuut, to be fair, I a) don't particularly care, and b) don't really have that much knowledge about the Ryzen AI 9 HX 370 CPU.
Anyone care to point me vaguely in the direction of something meaningful that would suggest that CPU-based inference would, in general, be even remotely comparable to dedicated GPU performance for AI?
I remain pretty skeptical.
(That reddit thread kicks off with someone saying 'LLM isn't particularly compute intensive workload', which a) is obviously false, and b) gives me no confidence in their review. If LLMs were just memory intensive, we would all just be getting machines with more RAM, and no one would be going crazy with 4 parallel 24GB GPUs just to eke out the performance for inference; they'd just slap a 64GB DIMM in their machine and be done with it).
LLM inference is memory intensive, but it IS compute intensive too.
Being able to run on a CPU at 1 token per century is not what anyone wants.
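For anyone who wants numbers rather than vibes, here's a rough back-of-envelope sketch (every figure below is an assumed placeholder, not a measurement of this chip): prompt processing is roughly compute-bound, while single-stream token generation is largely bound by how fast the weights can be streamed from memory.

    # Back-of-envelope: LLM inference is both compute- and memory-bound.
    # Every number below is an assumption for illustration, not a measurement.

    params = 7e9            # assumed 7B-parameter model
    bytes_per_weight = 0.5  # assumed ~4-bit quantization
    weight_bytes = params * bytes_per_weight

    prompt_tokens = 2000    # assumed prompt length
    compute = 50e12         # assumed ~50 TOPS of low-precision compute
    bandwidth = 100e9       # assumed ~100 GB/s memory bandwidth

    # Prefill: roughly 2 ops per parameter per prompt token -> compute-bound.
    prefill_s = 2 * params * prompt_tokens / compute

    # Decode: each generated token re-reads all the weights once -> bandwidth-bound.
    decode_tok_s = bandwidth / weight_bytes

    print(f"prefill: ~{prefill_s:.1f} s for {prompt_tokens} prompt tokens")
    print(f"decode ceiling: ~{decode_tok_s:.0f} tokens/s")

So "just add RAM" only fixes capacity; bandwidth and compute still set the speed ceiling.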
> GPD Pocket 4 uses LPDDR5x memory with a speed of 7500MT/s, available in 16GB or 32GB or 64GB capacities. It can allocate up to 16GB of memory to the GPU, allowing AI applications that require large amounts of VRAM to perform optimally.
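If that spec is accurate, the peak bandwidth is easy to sanity-check (assuming a 128-bit memory bus, which is typical for this class of chip but not stated in the quote):

    # Peak bandwidth estimate from the quoted LPDDR5x spec; bus width is an assumption.
    mt_per_s = 7500e6        # 7500 MT/s, from the quote
    bus_width_bytes = 16     # assumed 128-bit bus
    peak_gb_s = mt_per_s * bus_width_bytes / 1e9
    print(f"peak memory bandwidth: ~{peak_gb_s:.0f} GB/s")  # ~120 GB/s

That's respectable for a handheld, but still several times lower than the memory bandwidth of the 24GB discrete GPUs mentioned above, which is roughly what the skepticism in this thread is about.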
This looks neat, but it looks like it's trying to be everything to everyone, and while that's always tempting for a new product, I think it's usually doomed to failure.
Why does it include an RS232 port, especially on such a small form factor? Why does it focus so much on local LLM support? Why has it got such a high-resolution display? Why has it got swappable ports?
Each one of those features makes sense in isolation for some market, but together I'm not sure there's anyone who's the target market for all of them, and because of that there are likely to be better options for each target market. Want to run an LLM? A Mac is going to do that much better with its unified memory and ML acceleration. Want to do sysadmin stuff plugged into an old switch? You probably already have an old Thinkpad for that. etc.
Thank you for the input. Now that you mention it, I heard of GPD many years ago, but I didn't realise this was an existing product line. Sorry for jumping to conclusions here.
"Doomed to failure" was too strong in hindsight, it seems GPD have found their niche. It does however sound like it is quite a niche. This product does have a bunch of tradeoffs that work for that niche, but that make it unlikely to be suitable for a mainstream audience.
A product suitable for a mainstream audience is a laptop with a full-sized keyboard and a 14" to 17" screen. There is already an entire market of those. I don't really understand the critique.
This is not GPD's first laptop in this size and design.
Yes, many people who buy laptops like this need a physical RS232 port and a KVM port.
Yes, you could buy an old ThinkPad, but it won't fit in a pocket; you could buy something small, but it'll be slow; or you could buy this and get both.
These and other related machines are super popular among people interested in handheld gaming. I agree that the AI label is just for hype, but GPD does make good machines.
It's an interesting idea. GPD makes machines that will appeal to those interested in handheld gaming and those on pager duty.
I own devices from GPD, and I have had two issues. First, the devices are simply not reliable: in just over a year the screen developed a green vertical line, and the display died a few months later. Second, the keyboard is simply unusable if you plan to program on it.
Regardless of the success of this crowdfunding round, I bet they still provide supporting downloads of drivers & executables via zip files served from Google Drive.
[1] https://www.reddit.com/r/Amd/comments/1gfr60l/ryzen_ai_300_t...