A PhD student is very different from a professional developer with years of real-world experience in industry. It's a completely different job.
The title is highly misleading.
A "developer" is just someone who writes software and not necessarily a "professional developer" with years of experience in a particular kind of job like what you have in mind.
Moreover, I would expect that the vast majority of "professional software engineers" who make a living writing software without any particular domain in focus might actually fare worse on floating-point gotchas than scientific domain workers who are "software engineering dilettantes" but nonetheless do computation as their job.
>Perhaps not surprisingly, the most predictive factor is simply Contributed Codebase Size, the effect of which is shown in Figure 16.
>There is a gain of only about 2/15 compared to those who reported a codebase where floating point was not intrinsic or where they were not involved.
The title would be better supported if they had included a higher proportion of HPC/ML/etc. software engineers (post-PhD) who work on larger-scale projects in industry or labs.
Specifically, I disagree with this assessment:
>We believe that the combination of our recruitment process and the resulting background of the participants illustrated here suggest that our sample is a good representative of software developers who write code for, and in support of, science and engineering applications.
That's just outright ignoring all science and engineering applications outside of academia/academic research.
> The title would be better supported if they had included a higher proportion of HPC/ML/etc. software engineers (post-PhD) who work on larger-scale projects in industry or labs.
Their premise is that if folks in academia are having a hard time with it, then perhaps the training and tooling around floating point need some reconsideration. That seems very reasonable to me.
There have been 50+ years of this reconsideration. Some problems are simply hard, and perhaps nothing will make them easier than simply learning the nuances.
For some domains, there are solutions, like arbitrary-precision integers (BigInt) to avoid overflow, but each comes with its own tradeoffs.
For general floating point, it's mathematically quite likely that no better tradeoff exists: you cannot stuff infinitely many values into a finite number of bits.
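To make the tradeoff concrete, here's a minimal Python sketch (Python's built-in `int` is arbitrary-precision, playing the role of BigInt here): exact arbitrary-precision integers dodge overflow, but any fixed-width float format must round, and precision is silently lost at both ends.

```python
# Classic floating-point gotcha: 0.1 and 0.2 have no exact binary
# representation, so their rounded sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Python ints are arbitrary-precision ("BigInt"), so integer overflow
# is avoided entirely -- at the cost of speed and unbounded memory.
print(2**64 + 1)          # 18446744073709551617, computed exactly

# But a 64-bit float has only 53 bits of significand, so converting a
# large integer to float silently drops the low-order bits:
print(float(2**64 + 1) == float(2**64))   # True: the +1 is lost
```

The same tension applies to any finite format: widen it and you push the rounding further out, but you never eliminate it.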
What % of scientific code do you think is written outside the areas this paper surveyed? Having been on both sides, I find that the majority of numerical code, especially anything more complex than simple math (which is almost always written poorly outside academic areas), is written by academics, with the results then adopted as libraries in engineering.