
> floating point numbers don't really have any advantages over exact real numbers.

With floating point numbers, you can solve quadratic equations, calculate sin/cos, calculate log/exp. These things have lots of applications.
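
For instance, a quick Python sketch of the quadratic formula over floats (the function name here is mine, just for illustration):

    import math

    def solve_quadratic(a, b, c):
        # Real roots of a*x^2 + b*x + c = 0, assuming b*b >= 4*a*c.
        d = math.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    print(solve_quadratic(1.0, -3.0, 2.0))  # (2.0, 1.0)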




I never said that they don't have applications. And you can do all those things with real numbers. What's your point?


> you can do all those things with real numbers.

No you can't.

> What's your point?

Floats really do have lots of advantages over exact real numbers. Being able to compute transcendental functions is one of them. There are others.

They're freaking fast: my cheap laptop with an integrated GPU can do almost 1 TFLOPS. They're fixed-size, i.e. you can handle them without dynamic memory allocation.


I think you may be misunderstanding what the other guy means by exact real numbers: not rationals (for which indeed you can't do things like square roots) but real numbers expressed, say, as functions that deliver arbitrarily close rational approximations. With these you can compute e.g. square roots just fine.
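
To make that concrete, here's a rough Python sketch (all the names are mine, and this is just one way to encode the idea): a real is a function from a precision n to a rational within 2**-n of the true value, and square root becomes computable via Newton's method on exact rationals.

    from fractions import Fraction

    def sqrt_real(x):
        # Computable square root of a non-negative Fraction x.
        # Returns a function: precision n -> Fraction within 2**-n of sqrt(x).
        def approx(n):
            tol = Fraction(1, 2 ** n)
            guess = Fraction(max(x, 1))  # starts at or above sqrt(x)
            # Newton's method; sqrt(x) always lies in [x/guess, guess],
            # so stop once that bracket is tighter than the tolerance.
            while guess - x / guess > tol:
                guess = (guess + x / guess) / 2
            return guess
        return approx

    root2 = sqrt_real(Fraction(2))
    print(float(root2(50)))  # approximates sqrt(2) to within 2**-50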

They have disadvantages of their own, though: for instance, if two of these are equal then in general you can't determine that they are in finite time.


> if two of these are equal then in general you can't determine that they are in finite time.

This is arguably a problem with floating point as well. Two computations F and G that ought to produce the same result can end up differing because of rounding errors, so testing floating point numbers for equality is inadvisable anyway. Also, inequality of real numbers is still semi-decidable, which is useful.
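
A quick Python illustration:

    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2
    0.30000000000000004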

People keep bringing up these problems around not being able to compute discontinuous functions, but those problems still manifest over floating point numbers, in the sense that a small rounding error in the input can drastically affect the output. That isn't a problem over the rational numbers, but many operations, like square root, are not possible over the rationals.
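
For example, floor is discontinuous at the integers, and in Python a one-ulp rounding error in the input flips its output:

    >>> import math
    >>> (0.1 + 0.7) * 10     # exact arithmetic would give 8.0
    7.999999999999999
    >>> math.floor((0.1 + 0.7) * 10)
    7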


Symbolic computation is even slower than rational arithmetic. If you only need a couple of numbers, you’ll probably be OK, but if you need a billion of them, you’ll have problems.

While both rationals and symbolics have their uses, saying that floats don’t have any advantages over them is just wrong.


You're claiming I said things I didn't say.

Two things:

1. I did acknowledge that floating point numbers are faster. I expressly stated that they do have some advantages over real numbers.

2. I'm not talking about symbolic computation.


> I did acknowledge that floating point numbers are faster

Yes you did. And then you said it’s their only advantage. It’s not the only one.

For example, in some environments, dynamic memory allocation is undesirable. Especially since you mentioned physical sensors: many real-life sensors are not directly connected to PCs but are instead handled by MCU chips.

> I'm not talking about symbolic computation.

The numbers you’re talking about are pretty close to symbolics in many practical respects. From the Wikipedia article “Computable number”:

> equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus


I didn't mention memory use and dynamic allocation because I thought it was obvious. I filed it under "faster".


No, it's not just "faster"; it makes the difference between possible and impossible. With a physical sensor connected to a small MCU, either you do floats, or you do nothing at all. There's simply no way for a machine like that to do math the way you're thinking.

I see this all the time with developers who, like you, have zero embedded experience: you don't seem to understand that in many embedded environments, dynamic memory allocation is simply not possible. You allocate all memory at the start of the program, and that's it. So doing math the way you suggest is completely impossible.


My point about floating point numbers having no advantages other than speed and memory usage was this: some people think you can decide equality of floating point numbers, while you can't decide equality of real numbers, but that advantage is an illusion, because rounding errors can make numbers that should be equal compare unequal. So ultimately it's a pure efficiency vs. accuracy trade-off, where efficiency includes things like speed and memory, and of course that can make exact reals impractical in an embedded setting.

I can't list out every single disadvantage of something being slower and less space-efficient, including in your pet interest, which is embedded programming. Is that not politically correct?

And ultimately, I'm interested in beautiful things as well as practical things. I fully acknowledged that the idea here is impractical, though theoretically possible, with some very interesting theorems coming out of it. I don't want to always think like a hard-nosed practitioner. That's stupidly dogmatic.

[edit]

Const-me is complaining that I didn't mention his pet interest, which is GPGPU. And then he said some totally irrelevant and obvious stuff (to anyone who knows CS), just like you did. This is why we can't have nice things.


GPGPUs ain’t exactly my interest; they, and embedded too, are just parts of my job.

I’ve been programming for a living for a couple of decades now, and I like writing efficient code. Not all ideas, however beautiful they might seem to some people, are good enough for actual software that people use to accomplish stuff, and this idea is one of them. Spending thousands of CPU cycles to compute a = b * c is extremely inefficient in the vast majority of cases. Even a hardware-based implementation wouldn’t be efficient; see what happened to Lisp machines.
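
As a crude illustration of the gap (Python; exact timings vary by machine, and true computable reals would be slower still than plain rationals):

    from fractions import Fraction
    import timeit

    # Time a million multiplications each: floats vs. exact rationals.
    f = timeit.timeit('a * b', globals={'a': 0.1, 'b': 0.2})
    r = timeit.timeit('a * b', globals={'a': Fraction(1, 10), 'b': Fraction(2, 10)})
    print(f, r)  # the Fraction multiply is typically 1-2 orders of magnitude slower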


Beauty is in the eye of the beholder, and therefore highly subjective.

Apparently, the thing you find beautiful, I find impractical, and therefore applicable only to a narrow set of problems and environments.

Speaking of environments: even on PCs, I code for GPGPU as well. Due to the extreme level of parallelism, GPU code has almost no support for dynamic memory: there are append/consume buffers, but they’re very limited and not actually dynamic, just static buffers with a thread-safe cursor over them.


Also, sometimes you can’t do dynamic memory allocation because it’s explicitly forbidden by safety standards: DO-178B, IEC 62304, etc.



