
> I did acknowledge that floating point numbers are faster

Yes you did. And then you said it’s their only advantage. It’s not the only one.

For example, in some environments, dynamic memory is undesirable. Especially since you mentioned physical sensors: many real-life sensors are not directly connected to PCs but instead are handled by MCU chips.

> I'm not talking about symbolic computation.

The numbers you’re talking about are pretty close to symbolic computation in many practical respects. From the Wikipedia article “Computable number”:

> equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus
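The definition quoted above can be made concrete without any of that machinery. A common representation (this is a sketch of one standard encoding, not the only one; the names here are hypothetical) treats a computable real as a function that, for any requested precision n, returns an integer approximation scaled by 2^n:

```c
#include <stdint.h>

/* Sketch: a computable real x is represented as a function that, given a
 * precision n, returns an integer k with |k / 2^n - x| <= 2^-n.
 * Every call may have to redo work at the new precision -- this is
 * exactly why these numbers are so much slower than hardware floats. */
typedef int64_t (*creal_fn)(int n);

/* The real number 1/3, to any requested precision. */
static int64_t one_third(int n) {
    /* floor(2^n / 3) is within one unit of (1/3) * 2^n. */
    return ((int64_t)1 << n) / 3;
}

/* Decide whether a and b differ by clearly more than 2^-n. Note the
 * asymmetry: we can confirm "definitely different", but true equality
 * of two computable reals is undecidable in general. */
static int differ_at(creal_fn a, creal_fn b, int n) {
    int64_t da = a(n), db = b(n);
    int64_t d = da > db ? da - db : db - da;
    return d > 2; /* slack of one unit of approximation error on each side */
}
```

For example, `one_third(10)` returns 341, since 341/1024 is within 2^-10 of 1/3.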

> I didn't mention memory use and dynamic allocation because I thought it was obvious. I filed it under "faster".

No, it's not just "faster", it makes the difference between possible and impossible. With a physical sensor connected to a small MCU, either you do floats, or you do nothing at all. There's simply no way for a machine like that to do math the way you're thinking.

I see this all the time with developers like you who have zero embedded experience: you don't seem to understand that in many embedded environments, dynamic memory allocation is simply not available. You allocate all memory statically at the start of the program, and that's it. So doing math the way you suggest is flatly impossible.
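For readers without embedded experience, the all-static style being described looks something like this (a minimal sketch; the sensor filter and names are hypothetical, but the pattern of fixed, compile-time-sized buffers and no malloc() anywhere is the point):

```c
#include <stdint.h>

/* Hypothetical MCU-style sensor filter. All storage is statically
 * allocated: the memory footprint is fixed and known at link time,
 * and nothing here ever calls malloc(). */
#define WINDOW 8

static float   samples[WINDOW]; /* ring buffer, allocated once, forever */
static uint8_t head;            /* next slot to overwrite */

/* Push one raw reading; return the moving average over the window. */
float filter_push(float reading) {
    samples[head] = reading;
    head = (uint8_t)((head + 1) % WINDOW);
    float sum = 0.0f;
    for (int i = 0; i < WINDOW; i++)
        sum += samples[i];
    return sum / WINDOW;
}
```

A computable-real representation, by contrast, needs arbitrarily growing intermediate results, which has no place in a memory model like this.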

My point about floating point numbers having no advantages other than speed and memory usage was referring to the following. Some people think floating point has the advantage that you can decide equality of floats, whereas you can't decide equality of real numbers. But that advantage is an illusion, because rounding errors can make numbers that should be equal compare unequal. So ultimately it ends up being a pure efficiency-vs-accuracy trade-off, where efficiency covers both speed and memory, and of course that can make exact reals impractical in an embedded setting. I can't list out every single disadvantage of something being slower and less space-efficient, including in your pet interest, which is embedded programming. Is that not politically correct?
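The rounding-error point is easy to demonstrate with the classic example: both operands and the result are correctly rounded doubles, yet the "decidable" comparison gives an answer that contradicts ordinary arithmetic:

```c
#include <stdio.h>

/* 0.1, 0.2 and 0.3 are not exactly representable in binary64, and the
 * rounding of 0.1 + 0.2 lands one ulp away from the rounding of 0.3. */
void demo(void) {
    double a = 0.1 + 0.2;
    printf("%d\n", a == 0.3);  /* prints 0: not equal */
    printf("%.17g\n", a);      /* prints 0.30000000000000004 */
}
```

So equality of floats is decidable but frequently answers a different question than the one you meant to ask.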

And ultimately, I'm interested in beautiful things as well as practical things. I fully acknowledged that the idea here is impractical, though theoretically possible, with some very interesting theorems coming out of it. I don't want to always think like a hard-nosed practitioner. That's stupidly dogmatic.


Const-me is complaining that I didn't mention his pet interest, which is GPGPU. And then he said some totally irrelevant and obvious stuff (to anyone who knows CS), just like you did. This is why we can't have nice things.

GPGPU ain’t exactly my interest; it, and embedded too, are just parts of my job.

I’ve been programming for a living for a couple of decades now, and I like writing efficient code. Not all ideas, however beautiful they might seem to some people, are good enough for actual software that people use to accomplish stuff; this idea is one of them. Spending thousands of CPU cycles to compute a = b * c is extremely inefficient in the vast majority of cases. Even a hardware-based implementation wouldn’t be efficient; see what happened to Lisp machines.

Beauty is in the eye of the beholder and is therefore highly subjective.

Apparently, the thing you find beautiful, I find impractical, and therefore only applicable to a narrow set of problems and environments.

Speaking of environments: even on PCs, I code for GPGPU as well. Due to the extreme level of parallelism, GPU code has almost no support for dynamic memory. There are append/consume buffers, but they’re very limited and not actually dynamic: just a static buffer with a thread-safe cursor over it.
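The append-buffer pattern described here can be sketched on the CPU side with C11 atomics; GPU implementations use the same idea with an atomic counter in device memory. This is a hypothetical sketch (the names and capacity are made up), showing why it isn't real dynamic allocation:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Sketch of an append buffer: a fixed-size array plus an atomic cursor.
 * "Allocation" is just an atomic increment; there is no free(), and the
 * capacity is fixed up front -- which is why it isn't truly dynamic. */
#define CAPACITY 1024

static uint32_t    items[CAPACITY];
static atomic_uint cursor; /* zero-initialized, like a GPU counter buffer */

/* Returns 1 on success, 0 when the buffer is full. Safe to call from
 * many threads at once: each caller claims a distinct slot. */
int append(uint32_t value) {
    unsigned slot = atomic_fetch_add(&cursor, 1);
    if (slot >= CAPACITY)
        return 0; /* out of space; the cursor overshoots but data is untouched */
    items[slot] = value;
    return 1;
}
```

Once the buffer fills, every further append fails until the whole thing is reset: there is no reuse, no resizing, no per-object lifetime.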

Also, sometimes you can’t do dynamic memory allocation because it’s explicitly forbidden by safety certification standards: DO-178B, IEC 62304, etc.
