> In the real world, your application is running in a container among hundreds or thousands of other containers
I mean, that’s an engineering decision too. In my day job we’re capturing, pre-processing, running inference on, and post-processing about 500 Mpx/s worth of live image data at about 80 ms/frame end-to-end at the edge. The processor SoM costs about $3000/unit and uses about 50 W running flat out. The retail cost of our overall product is two orders of magnitude more than what the processor is worth, but it incurs zero recurring costs for us.
Edit: and it’s got 64GB of Unified RAM that I’ve got all to myself :)
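For a sense of scale, here's a rough back-of-envelope sketch of what those numbers imply. The 500 Mpx/s and 80 ms figures are from the comment above; the bytes-per-pixel value is purely an illustrative assumption, not something the commenter stated.

```python
# Back-of-envelope numbers for the edge pipeline described above.
# 500 Mpx/s and 80 ms end-to-end latency are from the comment;
# BYTES_PER_PIXEL is an assumed value for illustration only.

PIXELS_PER_SEC = 500e6   # 500 Mpx/s of live image data
LATENCY_S = 0.080        # 80 ms/frame end-to-end
BYTES_PER_PIXEL = 2      # assumption: e.g. 16-bit raw sensor data

# Data resident somewhere in the capture -> pre-process -> inference ->
# post-process pipeline at any given moment (Little's law style estimate).
in_flight_px = PIXELS_PER_SEC * LATENCY_S
in_flight_gb = in_flight_px * BYTES_PER_PIXEL / 1e9

# Raw data volume pushed through the box per hour.
tb_per_hour = PIXELS_PER_SEC * BYTES_PER_PIXEL * 3600 / 1e12

print(f"~{in_flight_px/1e6:.0f} Mpx (~{in_flight_gb:.2f} GB) in flight at any moment")
print(f"~{tb_per_hour:.1f} TB/hour of raw pixel data processed")
```

Under those assumptions that's roughly 40 Mpx (~0.08 GB) in flight and ~3.6 TB of raw pixels per hour, which is the kind of volume where shipping data to the cloud stops being attractive and a one-time $3000 SoM starts looking cheap.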
I was wondering if someone from a different sub-industry would disagree here :D
That sounds like a very interesting job, with quite different requirements and constraints from what I'm used to. One day, I'll get a job where application latency is critical, and optimizations matter deeply. Undoubtedly I'll find something else to be upset about, but at least it'll be a new complaint.