
Ideally, you're right. Realistically, Apple has to choose between using its powerful silicon (the GPU) for high-quality results or its weaker silicon (the Neural Engine) for lower-power inference. Devices designed around a single power profile (e.g. desktop GPUs) can integrate the AI logic into the GPU and get both high-quality and high-speed inference. iPhones gotta choose one or the other.

There's not nothing you can run on that Neural Engine, but it's badly undersized relative to the AI applications people are excited about today. Again: if chucking a few TOPS of optimized AI compute onto a mobile chipset were all we needed, everyone would already be running float16 Llama on their smartphone. Very clearly, something has to change.
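To make the float16 point concrete, here's a back-of-envelope sketch. The figures (7B parameters, 8 GiB of phone RAM, ~100 GiB/s of memory bandwidth) are illustrative assumptions, not Apple specs:

```python
# Back-of-envelope: why full-precision LLMs don't fit comfortably on phones.
# All hardware figures below are assumptions for illustration, not Apple specs.

GiB = 1024**3

def weight_bytes(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the model weights."""
    return n_params * bytes_per_param

# Assumption: a Llama-class 7B-parameter model stored in float16 (2 bytes/param).
llama_7b_fp16 = weight_bytes(7e9, 2)   # ~14 GB of weights alone
phone_ram = 8 * GiB                    # assumed high-end smartphone RAM

print(f"float16 7B weights: {llama_7b_fp16 / GiB:.1f} GiB")
print(f"fits in {phone_ram / GiB:.0f} GiB of RAM? {llama_7b_fp16 < phone_ram}")

# Even if the weights fit, token generation is memory-bandwidth bound:
# every decoded token streams essentially all weights through the chip once.
bandwidth = 100 * GiB                  # assumed ~100 GiB/s LPDDR bandwidth
tokens_per_sec = bandwidth / llama_7b_fp16
print(f"bandwidth-bound ceiling: ~{tokens_per_sec:.1f} tokens/sec")
```

The weights alone overflow the assumed RAM, which is why on-device efforts lean on aggressive quantization (4-bit or lower) rather than raw TOPS.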



