Hacker News

An extremely rough estimate: FFT (probably the most expensive of the things you mentioned) needs about 5·N·log2(N) operations.

If you have 1M source floats and want 60 FPS, that translates to only about 6 GFlops.

On a Pi 4, on paper the GPU can do 32 GFlops. Again on paper, the CPU can do 8 FLOPs/cycle per core, which (4 cores at 1.5 GHz) translates to 48 GFlops. That assumes you know what you’re doing: writing manually-vectorized C++ (http://const.me/articles/simd/NEON.pdf), abusing FMA, using OpenMP or similar for parallelism, and having a heat sink and ideally a fan.

So you’re probably good with both CPU and GPU. Personally, I would start with NEON for that. C++ compilers have had really good support for it for a decade now. These Vulkan drivers are brand new, and GLES 3.1, which added GPGPU, is not much older; I would expect bugs in both compiler and runtime, and those can get very expensive to work around.

While I don’t have any experience with Jetson, on paper it’s awesome, with 472 GFlops. Although the community is way smaller, nVidia does a better job supplying libraries; the CUDA toolkit has lots of good stuff, see e.g. the cuFFT piece. (I have used CUDA, cuFFT and other parts, just not on Jetson.)
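For a sense of what cuFFT looks like in practice, here is a minimal sketch of a 1D complex-to-complex forward transform. It follows the standard plan/exec/destroy pattern from the cuFFT API; the size and in-place layout are arbitrary choices for the example, and error checks are elided for brevity:

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

int main() {
    const int N = 1 << 20; // 1M points, matching the estimate above

    cufftComplex* d_data;
    cudaMalloc(&d_data, sizeof(cufftComplex) * N);
    // ... fill d_data with your samples (cudaMemcpy from host) ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);               // single 1D C2C transform
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD); // in-place forward FFT
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```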




It still gives me a giggle that the flops numbers you're talking about were supercomputer-level when I was a kid, and now I can buy that kind of power with beer money and lose it in the back of a drawer.


On the other hand, it’s sad how we failed the software.

We have devices capable of many GFlops in our pockets and many TFlops on our desks, yet we pay by the hour to use computers operated by companies like Amazon or Microsoft.



