
This is a pet subject of mine: as an FPGA developer, a huge amount of my time is spent on bit precision and on fixed-point and reduced-precision floating-point implementations of algorithms. In fact, one of the interview questions I ask is about how to perform various mathematical operations on floating-point values using only logical/integer operations.
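To make that concrete, here is a minimal sketch of the kind of exercise I mean: multiplying two IEEE-754 single-precision floats using only integer operations. This is a hypothetical illustration, not my actual interview question; it handles normal numbers only (no subnormals, NaN/Inf, overflow checks, or proper round-to-nearest-even), just enough to show the idea.

    /* Sketch: float multiply using only integer ops (normal numbers only). */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float fmul_int(float fa, float fb) {
        uint32_t a, b;
        memcpy(&a, &fa, sizeof a);          /* reinterpret bits without UB */
        memcpy(&b, &fb, sizeof b);

        uint32_t sign = (a ^ b) & 0x80000000u;              /* sign of product */
        int32_t  ea   = (int32_t)((a >> 23) & 0xFF) - 127;  /* unbiased exponents */
        int32_t  eb   = (int32_t)((b >> 23) & 0xFF) - 127;
        uint64_t ma   = (a & 0x7FFFFFu) | 0x800000u;        /* implicit leading 1 */
        uint64_t mb   = (b & 0x7FFFFFu) | 0x800000u;

        uint64_t m = ma * mb;               /* 48-bit product of 24-bit mantissas */
        int32_t  e = ea + eb;

        if (m & (1ull << 47)) {             /* product in [2,4): renormalize */
            m >>= 24;
            e += 1;
        } else {                            /* product in [1,2) */
            m >>= 23;
        }
        m &= 0x7FFFFFu;                     /* drop implicit 1 (truncation, not RNE) */

        uint32_t r = sign | ((uint32_t)(e + 127) << 23) | (uint32_t)m;
        float out;
        memcpy(&out, &r, sizeof out);
        return out;
    }

    int main(void) {
        printf("%.7g (expected %.7g)\n", fmul_int(1.5f, -2.25f), 1.5f * -2.25f);
        return 0;
    }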

On the one hand I find the results unsurprising - even people I know to have worked a lot on numerics often have only a rudimentary understanding of the intricacies of corner-case behaviour in floating point, and yes, that absolutely manifests as hitting a wall when something curious goes wrong. Mostly this results in head-banging until you discover that the piece of code going wrong is the numerical piece, at which point you very quickly start looking for possible floating-point gremlins.
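For anyone who hasn't been bitten yet, here is a tiny, generic illustration (my own, not from the paper) of two classic gremlins of this sort: addition is not associative, and subtracting nearly-equal values destroys precision.

    #include <stdio.h>

    int main(void) {
        float big = 1e8f, small = 4.0f;

        /* Summation order changes the answer: the small terms are absorbed
         * one at a time on the left, but survive when added together first. */
        float left  = (big + small) + small;   /* stays at 1e8            */
        float right = big + (small + small);   /* becomes 100000008       */
        printf("left=%.1f right=%.1f\n", left, right);

        /* Catastrophic cancellation: the decimal inputs differ by 1e-7, but
         * the stored floats differ by ~1.19e-7 (one ulp at 1.0), so the
         * difference carries almost none of the original precision. */
        float x = 1.0000001f, y = 1.0000000f;
        printf("x - y = %g\n", (double)(x - y));
        return 0;
    }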

Having said that, this paper seems to have a very academic view of what HPC is. Even for people designing HPC systems, numerical optimization is rarely a large chunk of the job, so deep floating-point expertise is probably not as important as the paper implies - I think the fact that few people have a good understanding reflects the fact that it's not often necessary.

Finally, while we're on the topic: does anyone know of a good tool that lets me write an equation, specify the precision of the inputs/outputs, and get back the required precision of all the operators? I know MATLAB has the DSP toolbox, but it has some serious limitations; I'm still in search of something fantastic.
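For context, a toy sketch of the bookkeeping such a tool would automate, assuming a plain signed Q(int.frac) fixed-point model with worst-case bit growth and no rounding/saturation analysis. The format names and rules here are the textbook ones, not any particular vendor's.

    #include <stdio.h>

    typedef struct { int ibits; int fbits; } qfmt;   /* integer / fractional bits */

    /* Full-precision sum: one extra integer bit covers the worst-case carry. */
    static qfmt q_add(qfmt a, qfmt b) {
        qfmt r;
        r.ibits = (a.ibits > b.ibits ? a.ibits : b.ibits) + 1;
        r.fbits = (a.fbits > b.fbits ? a.fbits : b.fbits);
        return r;
    }

    /* Full-precision product: integer and fractional widths both add. */
    static qfmt q_mul(qfmt a, qfmt b) {
        qfmt r = { a.ibits + b.ibits, a.fbits + b.fbits };
        return r;
    }

    int main(void) {
        qfmt x = { 4, 12 };   /* Q4.12 input        */
        qfmt c = { 1, 15 };   /* Q1.15 coefficient  */

        qfmt prod = q_mul(x, c);          /* x * c       */
        qfmt acc  = q_add(prod, prod);    /* x*c + x*c   */

        printf("product needs Q%d.%d, accumulator needs Q%d.%d\n",
               prod.ibits, prod.fbits, acc.ibits, acc.fbits);
        return 0;
    }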




Herbie is not an exact match for your need, but it might be useful nevertheless (it automatically rewrites floating-point expressions to reduce rounding error): https://herbie.uwplse.org/



