
The Floating-Point Guide (2010) - arunc
https://floating-point-gui.de/
======
roland35
You have to be very careful using floating point on embedded processors not
only for the reasons in this website, but for speed too! Unless you have a
separate floating-point core (ARM Cortex M4s have one, M3s do not) it can take
1,000's of clock cycles to perform any floating point math like addition,
multiplication, etc.

Even with a floating-point unit you need to be careful when switching
into and out of interrupts! The floating-point core has its own set of
registers you need to keep track of. Doubles are even worse!

Basically it is almost always better to use integers when you can, and be
aware of what types you are using (always important in C anyways).
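
A minimal sketch of the integer alternative: pick a unit small enough
(millivolts, cents) that everything stays a whole number, and do all the
arithmetic on plain ints. The sensor values below are made up for
illustration:

```python
# Keep sensor readings as integer millivolts instead of float volts;
# integer sums and averages are exact.
readings_mv = [3301, 3299, 3305]
total_mv = sum(readings_mv)            # exactly 9905
avg_mv = total_mv // len(readings_mv)  # 3301 (truncating average)

# The float version drags in binary representation error:
readings_v = [3.301, 3.299, 3.305]
print(sum(readings_v))                 # compare against 9.905
```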

~~~
petermcneeley
Something seems off about your numbers given that you can emulate fp with
integer ops in much less than 1000 instructions.

~~~
gpderetta
probably because on CPUs without FP units, FP instructions raise invalid
instruction exceptions that need to be handled by the kernel, and that can be
very expensive (that used to be a thing on x86 in the past).

~~~
Symmetry
Kernel? What kernel? You probably aren't using a separate kernel with an M3,
though you might have an RTOS. You'll, as a rule, have the compiler putting
the floating point emulations in directly. I think the performance hit is only
around a factor of 100 or so for an M3 though.

------
emptybits
FWIW and for all its other shortcomings, Raku (nee Perl 6) handles common
arithmetic in a manner that wouldn't surprise a mathematician. Or non-
mathematician, for that matter. Witness:

    
    
        $ python3 -c 'print(0.3 == 0.1 + 0.2)'
        False
        
        $ ruby -e 'puts 0.3 == 0.1 + 0.2'
        false
        
        $ perl6 -e 'say 0.3 == 0.1 + 0.2'
        True
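
The False/True split above gets less mysterious once you look at the exact
doubles behind the literals; for instance, Python's decimal module can print
them:

```python
from decimal import Decimal

# The exact values of the doubles nearest each literal:
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875

# 0.1 and 0.2 both round up, so their sum lands just above 0.3:
print(0.1 + 0.2)     # 0.30000000000000004
```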

~~~
enriquto
As a mathematician, I am actually very surprised by Perl 6's behavior here.
What the hell is going on? Does it use fixed point arithmetic or what?

~~~
dragonwriter
It uses exact rational arithmetic for values that it can represent that way,
preferring correctness to efficiency. Scheme and some other languages do the
same thing.

Using imprecise but efficient-to-calculate-with numbers for exact literals by
default is probably the most pervasive premature optimization in all of
computing.
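
Python ships the same exact-rational behaviour in its standard library, just
not as the default for literals; a small sketch with fractions.Fraction:

```python
from fractions import Fraction

# Exact rational arithmetic, as Raku and Scheme use for decimal literals:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# Constructing a Fraction from a float instead exposes the rational
# the double actually stores:
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```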

~~~
enriquto
> It uses exact rational arithmetic for values that it can represent that way,
> preferring correctness to efficiency.

This is _exactly_ the same situation as with floating point numbers. In both
cases you have a predetermined finite set of rational numbers, with exact
rational arithmetic when you stay within those numbers, and deterministic
rules when your operation exits this finite set.

> Using imprecise but efficient-to-calculate-with numbers for exact literals
> by default is probably the most pervasive premature optimization in all of
> computing.

I do not see how this is a matter of precision or efficiency. Floating point
arithmetic was deemed more useful for general purpose for good reasons: the
representable numbers are mostly scale-free, so that you do not care about the
absolute size of your numbers (you can compute in angstroms or in parsecs and
obtain essentially the same results). With rational arithmetic using bounded
integers, you cannot represent very large or very small numbers. On the other
hand you can represent small fractions like 1/3, which is arguably useful in
some cases, but not really a big deal in practice. There's no reason why
rational arithmetic with bounded numerator and denominator could not be
efficiently implemented in hardware as fast as floating point; I do not
understand your point.
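
The "mostly scale-free" claim is easy to check: the gap to the next
representable double, relative to the number itself, sits between 2^-53 and
2^-52 regardless of magnitude. A quick check (Python 3.9+ for math.ulp):

```python
import math

# Relative spacing of adjacent doubles at very different scales:
# an angstrom, one metre, and a parsec, all in metres.
for x in (1e-10, 1.0, 3.086e16):
    rel = math.ulp(x) / x
    print(f"{x:.3e}: {rel:.2e}")   # all between 2**-53 and 2**-52
```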

~~~
saithound
It is not at all the same situation as with floating point numbers.

1\. It does not use a predetermined finite set of rational numbers. It uses a
bigint numerator and a 64-bit denominator.

2\. Even the case of bounded numerator and bounded denominator would preserve
more of the nice mathematical properties of rational arithmetic than IEEE 754
floating-point arithmetic does (e.g. the associativity of addition).

3\. While the latter could be implemented in hardware, it is not implemented
in hardware. The defaults for literals prefer the choice that is implemented
in hardware, IEEE 754 floating-point. This is a premature optimization.
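
The associativity point can be checked from any Python prompt; exact
rationals stand in here for the bounded scheme (modular arithmetic preserves
associativity the same way):

```python
from fractions import Fraction

# IEEE 754 addition is not associative:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

# Rational addition is:
a, b, c = Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)
print((a + b) + c == a + (b + c))                # True
```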

~~~
enriquto
> 1\. It does not use a predetermined finite set of rational numbers. It uses
> a bigint numerator and a 64-bit denominator.

This is rather ugly, for it is not closed under inversion.

> 2\. Even the case of bounded numerator and bounded denominator would
> preserve more of the nice mathematical properties of rational arithmetic
> than IEEE 754 floating-point arithmetic does (e.g. the associativity of
> addition).

I do not see how this can possibly be the case. How do you define rational
arithmetic with bounded denominators so that addition is associative?

> 3\. While the latter could be implemented in hardware, it is not implemented
> in hardware. The defaults for literals prefer the choice that is implemented
> in hardware, IEEE 754 floating-point. This is a premature optimization.

Alright, but this view is rather subjective, and only valid if you find
rational arithmetic more natural than floating point (which many people do
not). Regardless of efficiency, using rational arithmetic with bounded
integers in numerical computing would be extremely unnatural to most
analysts: they would always need to "normalize" the computations so that the
numbers do not become too small, plus a lot of other ugly tricks that are not
needed in floating point.

Besides some trivial decimal arithmetic (that can be easily implemented in
fixed point for the common use case of counting money), I do not really see
the point of the rational representation with bounded ints. Of course, when
the denominator is allowed to be a bigint, this is very useful in math, but
you'll agree this is a completely different context.

~~~
saithound
> I do not see how this can possibly be the case. How do you define rational
> arithmetic with bounded denominators so that addition is associative?

There's only one mathematically natural choice. In the case of bounded
numerator and bounded denominator, you simply define (an/ad) + (bn/bd) =
(bd × an + ad × bn) / (bd × ad), where the + on the right hand side is
ordinary two's complement addition (and the equality test for an/ad and
bn/bd is ad × bn == bd × an,
which also naturally accounts for the zero-denominator-due-to-zero-divisors
cases; alternatively, you can put everything in lowest terms, but there's no
point).
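
A sketch of that definition with the 64-bit two's complement wraparound made
explicit (the function names are my own; this is an illustration of the
scheme, not anyone's production number type):

```python
MASK64 = (1 << 64) - 1

def wrap(x):
    """Reduce an integer to a signed 64-bit two's complement value."""
    x &= MASK64
    return x - (1 << 64) if x >= (1 << 63) else x

def add(a, b):
    """(an/ad) + (bn/bd) = (bd*an + ad*bn) / (bd*ad), everything wrapping."""
    (an, ad), (bn, bd) = a, b
    return (wrap(wrap(bd * an) + wrap(ad * bn)), wrap(bd * ad))

def eq(a, b):
    """an/ad == bn/bd iff ad*bn == bd*an (again mod 2**64)."""
    (an, ad), (bn, bd) = a, b
    return wrap(ad * bn) == wrap(bd * an)

x, y, z = (1, 10), (2, 10), (3, 10)
print(eq(add(x, y), z))                        # True: 1/10 + 2/10 == 3/10
print(add(add(x, y), z) == add(x, add(y, z)))  # True: wrapping is associative
```

Because multiplication and addition mod 2^64 are associative and
distributive, both groupings produce componentwise-identical results.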

> Regardless of efficiency, using rational arithmetic with bounded integers in
> numerical computing would be extremely unnatural to most analysts

Floating-point was invented by numerical analysts for numerical analysts. No
wonder they find it most natural. The overwhelming majority of software
developers are not numerical analysts, and most programming languages do not
target numerical analysts. Nobody says that numerical analysts should not use
floats (except maybe the unum guy), we're arguing about defaults in languages
that explicitly do not have numerical analysts among their core target
audience.

~~~
enriquto
>> I do not see how this can possibly be the case. How do you define rational
arithmetic with bounded denominators so that addition is associative?

> That's easy. In the case of bounded numerator and bounded denominator, you
> simply define (an/ad) + (bn/bd) = (bd x an + ad x bn) / (bd x ad)

This definition is not complete. What happens when "bd x ad" is larger than
the maximum allowed denominator ?

> The overwhelming majority software developers are not numerical analysts,

the overwhelming majority of young people who learn to program today do
machine learning, which is based on munging huge arrays of floating point
numbers. Tell them to use rationals if you dare!

~~~
saithound
> This definition is not complete. What happens when "bd x ad" is larger than
> the maximum allowed denominator ?

All integer operations are two's complement operations, as I'm sure you guessed
anyway.

> the overwhelming majority of young people who learn to program today do
> machine learning, which is based on munging huge arrays of floating point
> numbers. Tell them to use rationals if you dare!

That's very far from factually true, but this kind of argument is not relevant
to the Raku defaults in any case. A machine learning library can use whatever
optimized number representation its creators wish to use. In fact, the default
choice of most language implementors (double-precision floating point) is
typically not the representation used in training or inference on deep
learning models anyway. The 1080Ti is fast enough only with single-precision
floats.

------
brazzy
Discussion when I created the site and first submitted it to HN:
[https://news.ycombinator.com/item?id=1257610](https://news.ycombinator.com/item?id=1257610)

------
IEEE754
FWIW, their "basic answer" page is the simplest I've seen that neither lies
nor omits critical factors in the 0.1 + 0.2 problem. It's probably a good
starting point that induces some lingering questions tempting you to find out
more.

If you want a thorough understanding you will want to look at representation
error, rounding error, and error propagation: why they exist and how they
interact.

The interplay between those three forms of numerical error in floating point
numbers will also allow you to more easily see the world of limitations of fp
beyond 0.1 + 0.2 for yourself.

~~~
ken
I understand the "problem" from the hardware perspective, but I still don't
accept their "basic answer" as reasonable.

> It’s not stupid, just different.

Over the past 50 years, my computer has adapted to how humans normally operate
in nearly every other way. Why do they continue to use this system which
produces results different from what any normal person expects?

> Computers use binary numbers because they’re faster at dealing with those

Computers are faster at dealing with all-caps ASCII, too, but we've accepted
here that micro-optimization is less important than _doing what people want_.
Most of the languages I use have even moved past _fixnums_. Why have we not
improved real arithmetic since 1985?

~~~
msla
> Over the past 50 years, my computer has adapted to how humans normally
> operate in nearly every other way. Why do they continue to use this system
> which produces results different from what any normal person expects?

Lisps and Lisp-derived languages, like Scheme, have had a proper numerical
tower, including rationals, for decades now. Using reals is optional, but
using rationals and everything else imposes an efficiency cost, so people make
their decision. Implementing rationals in hardware would not necessarily make
them more efficient; that is, if you think having rational support in hardware
would help, you have to make the case. It isn't an automatic win:

[https://yosefk.com/blog/its-done-in-hardware-so-its-
cheap.ht...](https://yosefk.com/blog/its-done-in-hardware-so-its-cheap.html)
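
One concrete way to see the efficiency cost mentioned above: exact rationals
grow under iteration. A small illustration (the logistic map is just a
convenient example):

```python
from fractions import Fraction

# Ten steps of the logistic map x -> 4x(1-x), starting from 1/3.
# With exact rationals the denominator squares every step.
x = Fraction(1, 3)
for _ in range(10):
    x = 4 * x * (1 - x)
print(x.denominator == 3**1024)   # True: already over 1600 bits

# The float version stays 64 bits forever, at the cost of exactness.
y = 1 / 3
for _ in range(10):
    y = 4 * y * (1 - y)
print(abs(y - float(x)))          # the accumulated rounding error
```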

~~~
ken
The numerical tower is one of the features of Lisp that the Algol descendants
have not yet stolen, and I don’t know why not.

“Efficiency” is tough to believe given all the other inefficient yet nice
features that have been universally adopted, like Unicode, variable length
lists, bigints, etc. In many dynamic languages, every method call is a hash
table lookup, yet we’re expected to believe they don’t use Decimals by default
because it would be too slow? In C++ I’d buy that excuse.

------
giu
A very useful resource that I've used quite a few times in the past for
comparing floats: [https://floating-point-
gui.de/errors/comparison/](https://floating-point-gui.de/errors/comparison/).
The latter also provides the following link to unit tests that cover a lot of
edge cases: [https://floating-point-
gui.de/errors/NearlyEqualsTest.java](https://floating-point-
gui.de/errors/NearlyEqualsTest.java)
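
For readers who just want something out of the box: Python's math.isclose
does a relative-plus-absolute-tolerance comparison in the same spirit as the
nearlyEqual function linked above (an analogue, not the site's exact
algorithm):

```python
import math

print(0.1 + 0.2 == 0.3)                        # False
print(math.isclose(0.1 + 0.2, 0.3))            # True

# Near zero a purely relative test always fails, so an absolute
# tolerance has to be supplied explicitly:
print(math.isclose(1e-12, 0.0))                # False
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True
```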

------
userbinator
Given the name I thought the site would have a GUI for playing around with
floating-point numbers, like this one:
[https://www.exploringbinary.com/floating-point-
converter/](https://www.exploringbinary.com/floating-point-converter/)

The "On Using Integers" page seems very narrowly applicable because it only
discusses fixed-point in the context of currency; in a lot of other
applications, notably DSP and other multimedia, fixed-point is the norm.

~~~
brazzy
The page is about using fixed point types to avoid binary rounding, which is
the underlying issue the whole website is about.

AFAIK in DSPs you use fixed point for a completely different reason, namely
because it's simpler and thus more efficient to implement in silicon.
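
For instance, audio and control code commonly uses the Q15 format: 16-bit
integers with an implied scale of 2^-15. A rough sketch (Python for
readability, though the point is that the same shifts run fine in C on an
FPU-less core):

```python
Q = 15  # Q15: 16-bit signed values with 15 fractional bits, range [-1, 1)

def to_q15(x: float) -> int:
    return int(round(x * (1 << Q)))

def q15_mul(a: int, b: int) -> int:
    # The raw product carries 2*Q fractional bits; shift back down.
    return (a * b) >> Q

half = to_q15(0.5)                 # 16384
quarter = q15_mul(half, half)
print(quarter == to_q15(0.25))     # True: 0.5 * 0.5 == 0.25 exactly
```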

------
gnufx
Wot, no Goldberg? E.g.
[https://dl.acm.org/citation.cfm?id=103163](https://dl.acm.org/citation.cfm?id=103163)
or
[https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)

~~~
brazzy
It's in the links of course, but I wrote the site specifically because I saw
it being used too often as a kneejerk response to basic questions, where it's
not helpful at all.

~~~
simonbyrne
Thank you! The Goldberg article is a terrible way to learn about floating
point, and the frequency with which it is referred to on StackOverflow is
really disheartening.

~~~
fargle
Why do you say that? It is an extremely good paper and provides a practical
and _high-level_ overview in an accessible (details to follow) format. If it's
still just too technical, then a) know there are "subtleties", b) use double
precision, and c) it's hard because... reasons.

~~~
simonbyrne
I disagree about practical and accessible:

\- its exposition is complicated (trying to prove everything in a general base
makes it difficult to understand)

\- it's woefully out of date (lack of guard digits hasn't been an issue for
at least 25 years, extended precision hasn't been an issue for the past 10 or
so, and most languages now default to having fairly strict floating point
semantics)

\- it gets bogged down in irrelevant minutiae (rounding modes and exception
flags, while available in modern hardware, aren't really supported by any
modern languages/compilers)

\- it doesn't really provide any practical advice (it barely mentions binary-
decimal conversion, it jumps to doubling precision and Kahan summation without
suggesting any intermediate steps such as sorted or pairwise summation).
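
For reference, the Kahan (compensated) summation mentioned above fits in a
few lines; a minimal sketch:

```python
def kahan_sum(xs):
    """Compensated summation: c carries the rounding error of each
    addition and feeds it back into the next term."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y   # the low-order bits lost when computing s + y
        s = t
    return s

xs = [0.1] * 10
print(sum(xs))         # 0.9999999999999999
print(kahan_sum(xs))   # 1.0
```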

But my biggest complaint is the frequency with which users are referred to it
on StackOverflow as if (1) it is a good way to learn about floating point
concepts, and (2) anyone using floating point numbers should be expected to
understand it all.

~~~
fargle
I don't even disagree from a purist standpoint. And yes, I think everyone
should read it. But they won't all, and don't need to, understand it.

But um, a sophomoric summary without the "complications" of proving it is
NOT better, nor more accessible.

And I see none of your bullet points are improved upon in the linked paper.

------
fargle
This article is terrible. Referring to the Goldberg paper as "long article
with lots of formulas" in a throwaway blog post shorter than most of these
comments is telling.

The "absurd" result that [1+2 != 3] * (0.1) isn't even the problem with
floating point. It's an even simpler misconception: that "terminating" vs.
"repeating" fractions are the same in base 10 as in base 2.

So we can understand that 0.33+0.33+0.33 != 1 in decimal due to rounding. So
when 0.1 (decimal), which is a truncated repeating "binary decimal" fraction,
isn't exact, we lose our shit. But why is base 10 good and base 2 bad? Why not
base 57? Clearly for any base, a truncated decimal-like fraction will be
inexact for a lot of numbers. Rounding errors will add up, but this is really
a binary vs. decimal issue.

More subtle are errors that grow simply because of issues specific to floating
point. For example re-normalization when numbers of very different magnitudes
are added or subtracted which eventually causes precision loss.
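
That magnitude problem is easy to demonstrate: a double keeps about 16
significant decimal digits, so anything much smaller than the running value
simply vanishes:

```python
# The ulp (gap between adjacent doubles) at 1e16 is 2.0, so adding
# 1.0 there changes nothing; at 1e15 the gap is 0.125 and it survives.
print((1e16 + 1.0) - 1e16)   # 0.0
print((1e15 + 1.0) - 1e15)   # 1.0
```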

This is neither hard to understand, nor the only complication with floating
point, nor is anything wrong, or bad, nor should anything be fixed in
IEEE-754. This is just stuff that numeric programmers MUST understand, at
least at the rudimentary level.

~~~
brazzy
It's telling how you call the website "terrible", and then proceed to do
_exactly_ what it does: give a basic, factual explanation why floating point
math does not behave as people naively expect.

No, it is not hard to understand, but it is not obvious, and in fact
counterintuitive to most people. Which is why there is a need for a website
like this for people to refer to instead of the Goldberg paper, which is great
if you actually need to do error propagation analysis on your numeric
algorithm but which does _not_ give this basic explanation and is therefore
_not_ suitable to be cited to people who are surprised that 0.1 + 0.2 != 0.3

So please explain to me how referring to the Goldberg paper as "long article
with lots of formulas" is anything but factually correct and a good reason to
provide an alternative?

~~~
fargle
OK, let's try again. I gave a correct summary of what I've learned from
Goldberg and elsewhere. In a short comment.

This article is "terrible" because it gives a single bad example and derides a
very good reference, implying it's too complicated. And steals its title.

Every programmer that uses floating point should read Goldberg. That blog post
is garbage. Post the real source, or an informative article, not junk.

~~~
brazzy
> I gave a correct summary of what I've learned from Goldberg and elsewhere.
> In a short comment.

So did I on a small website (yes, I'm the author). It's a website. Not an
article. Not a blog. There is more than the title page, which seems to be the
only thing you've looked at.

> This article is "terrible" because it gives a single bad example

Which example? What makes it bad?

> and derides a very good reference implying it's too complicated.

I absolutely do not "deride" the Goldberg paper, and it _is_ too complicated
for people without a CS background. That doesn't mean there's anything wrong
with the paper, or with those people. There _is_ something wrong with
kneejerk-posting a link to the paper as an RTFM to basic questions about
floating point behaviour. My intention was to provide a better resource for
those cases.

> And steals its title.

Don't be ridiculous. I use the _format_ of the title, which lots of people do
(it's basically become a meme now, not sure if Goldberg was the first to use
it either), with a critical difference that you seem to have missed: Goldberg
adresses his paper to "every Computer Scientist", I address my website to
"every programmer". Different audiences.

> Every programmer that uses floating point should read Goldberg.

Maybe. But it is not a good introduction.

> That blog post is garbage. Post the real source, or an informative article,
> not junk.

You have still not been able to explain what is wrong with it. I am beginning
to think you have serious problems with reading comprehension.

~~~
fargle
> So did I on a small website...

I was responding to you saying that a comment in a forum that I made was
"doing the same thing" as your small website. But it's not equivalent. I
didn't make a website, nor a paper. I didn't post it as a reference for the
community. I am making, what I hope you understand, a good faith critique.
When the critique written in 3 minutes has as much technical merit (says you)
as the "paper", that is not good.

> "Which example?"

You're using the example 0.1 + 0.2 != 0.3. This has nothing to do with
floating point. This is an issue of the choice of fraction base and an issue
of precision that could affect any non-analog computation.

> Don't be ridiculous

You understand _exactly_ what I mean. You replaced "Computer Scientist" with
"Programmer". Borderline infringement, borderline plagiarism.

> But it is not a good introduction

So did you give one? No. You summarized a very famous paper and tweaked its
title. You provide no real advice for what to do. You provide no practical
examples of real-world code for "those programmers" that just want to know
what to do.

> Maybe. But it is not a good introduction...

Why is it not good? I like that it puts the simple parts first and the formal
math later, so you don't get bogged down in it. You're as welcome to your
opinion as I am mine.

> I absolutely do not "deride" the Goldberg paper.

Yes you did. Let's say I said "Your website is oversimplified and difficult to
navigate. It is garbage." Did I deride yours? [yes. yes i did.]

> I am beginning to think you have serious problems with reading
> comprehension:

I have serious problems comprehending a disorganized web page with a web 1.0
navigation panel instead of a properly formatted paper.

> You have still not been able...

See above:

\- Your example is borderline wrong. It's about a different subject.

\- "Programmers" is not a good title when your audience is "Computer
Scientists" and "Software Engineers".

\- You changed the title in the HN post to obfuscate that you ripped off the
original title.

\- There is zero creative, or new information presented.

\- The most valuable part is the "References" page. In which every single link
contains more relevant and accessible information than your "site".

\- You aren't even linking to a definitive reference, but to an appendix of a
user manual instead of, for example: [http://perso.ens-
lyon.fr/jean-michel.muller/goldberg.pdf](http://perso.ens-lyon.fr/jean-
michel.muller/goldberg.pdf)

I think it'd be great stuff if:

\- you wrote a little blog

\- that introduced the goldberg paper gently

\- that augmented and highlighted the references. If they are a little
complicated, then _explain_ them. Don't imply "too complicated, you should
skip".

\- that was able to be printed in a coherent format. Not 8 or 10 different
individual web 1.0 frames.

There's a reason academic papers are formatted the way they are. It's actually
for precise communication. If you try to "simplify" and make things too much
more "accessible" for regular "programmers", you may falsely believe you are
helping.

But this is a bad idea. Complex subjects do require some amount of
concentration and work to understand. Floating point subtleties are not just
a zippy "programmer"-friendly website/blog away. You've tried to dumb down a
very introductory paper. It's amateurish and doesn't help.

I appreciate the references. Wouldn't it have just been easier to add them to
the Wikipedia article?

~~~
brazzy
> I am making, what I hope you understand, is a good faith critique.

"terrible", "junk" and "garbage" is not a good faith critique.

> When the critique written in 3 minutes has as much technical merit (says
> you) as the "paper", that is not good.

Except it's not a "paper" either, it's a website, with multiple pages, with
your "critique" being equivalent in content to maybe 1.5 of them.

> You're using the example 0.1 + 0.2 != 0.3. This has nothing to do with
> floating point. This is an issue of the choice of fraction base and an issue
> of precision that could affect any non-analog computation.

I.e. it has _everything_ to do with floating point. And it happens to be an
example of exactly the kind of real, concrete problem that real people writing
real programs encounter in the real world and then look for help with.

> You understand exactly what I mean. You replaced "Computer Scientist" with
> "Programmer". Borderline infringement, borderline plagiarism.

I understand that you have no fucking clue what the words "infringement" or
"plagiarism" mean. There are now _dozens_ of articles out there using that
title format (and I would not be certain Goldberg was the first), just like
there were dozens of articles using the "X considered harmful" format, and no
sane person (and no lawyer either) would consider any of it "stealing",
"infringement" or "plagiarism".

> You summarized a very famous paper and tweaked it's title. You provide no
> real advise for what to do. You provide no practical examples of real world
> code for "those programmers" that just want to know what to do.

What the flying FUCK? I very much do all of those things. How about you
actually LOOK at the thing you criticize?

> Why is it not good? I like that it puts the simple parts first and the
> formal math later

It is still very much an academic paper and starts using formal math notation
already when introducing floating point formats, and theorems two sections
later, about 10% into the paper. The "simple parts" are already way too formal
and too general for most of the people who get directed at it, and thus it
fails to be helpful for them.

> Yes you did.

No, I did not. That is a lie.

> Let's say I said "Your website is oversimplified and difficult to navigate.
> It is garbage." Did I deride yours? [yes. yes i did.]

Yes, the word "garbage" is obviously derisive - and I used no such word to
describe the paper. Unlike your example, I did not even _criticize_ it
explicitly. The worst thing I wrote was that it "didn’t seem to help with
your problem" - notice the "seem"?

> I have serious problems comprehending a disorganized web page with a web 1.0
> navigation panel instead of a properly formatted paper.

Well, that says more about you than about the website. Multiple other people
have praised the website's clean design and usability.

> \- Your example is borderline wrong. It's about a different subject.

Nonsense. It's an example of exactly the kind of problem people have and are
looking for help with.

> \- "Programmers" is not a good title when your audience is "Computer
> Scientists" and "Software Engineers".

My audience is programmers. People who don't think they're programmers can
feel free to ignore it. Or is your problem that it's not ideal for "Computer
Scientists" who also consider themselves programmers? So what?

> \- You changed the title in the HN post to obfuscate that you ripped off the
> original title.

What a load of bullshit. The HN post of this comment thread was not made by
me. The HN post I made in 2010 did in fact have the title format in question.
And the idea that it was "obfuscated" is just plain idiotic any way you look
at it.

> There is zero creative, or new information presented.

My main goal was not creativity or new information.

> The most valuable part is the "References" page. In which every single link
> contains more relevant and accessible information than your "site".

Says you. Pretty much everyone else seems to disagree.

> You aren't even linking to a definitive reference. But instead to an
> appendix of a user manual Instead of, for example: [http://perso.ens-
> lyon.fr/jean-michel.muller/goldberg.pdf](http://perso.ens-lyon.fr/jean-
> michel.muller/goldberg.pdf)

Now you're just making a fool of yourself. It's exactly the same content. Your
link is a scanned paper article with bad OCR that happens to lie around on
some scientist's (very much web 1.0, I might add) site and might disappear at
any time.

> I think it'd be great stuff if: - you wrote a little blog - that introduced
> the goldberg paper gently - that augmented and highlighted the references.
> If they are a little complicated, then explain them.

No, that would not be great. That would be an idiotic waste of time. The
solution for a reference that is already too long and complex for the target
audience's question is _not_ to add more stuff to it! We're not doing
literature analysis here!

> Don't imply "too complicated, you should skip".

That is not what I am implying (otherwise why would I list the references at
all?). It's "too complicated for you right now, here's the basics, now go back
there if you want more details".

> that was able to be printed in a coherent format. Not 8 or 10 different
> individual web 1.0 frames.

Format is always a matter of taste, but many people seem to like it.

> There's a reason academic papers are formatted the way they are.

Yeah, and the biggest reason is that they are designed to be published in
printed journals.

> It's actually for precise communication.

If you're seriously claiming that there is a fundamental difference between
one long article with separate sections, and a webpage with a navigation
index and some separate pages, then I really can't help you (but you need
help). And you keep using that expression "web 1.0". It doesn't mean what you
think it means. "frames" as well. You might want to inform yourself.

> If you try to "simplify" and make things too much more "accessible" for
> regular "programmers", you may falsely believe you are helping.

Many people have in fact stated that my website helped them. So it seems you
are provably wrong.

~~~
fargle
> My audience is programmers

This gets to the heart of why I felt it necessary to comment at all, and as
you have gathered, am being derisive and negative toward your _website with
navigation panel_.

Programmers is not a term I like. It implies that you trust someone to write a
program. And the way you use the term, you do not need to be an engineer or a
computer scientist. Therefore we need to "dumb down" some supposedly
too-complicated content X.

For another example in the same genre:
[https://poignant.guide/](https://poignant.guide/)

This kind of narcissistic behavior and writing is amateurish and insulting to
those of us who actually do the fairly hard work of learning and working in a
formal field.

Just about every security vulnerability and _every_ broken memory hog of a
java program is due to people acting and thinking like programmers, not
engineers. Computer Scientists are not the cause of rampant security
vulnerabilities.

~~~
brazzy
Indeed it seems we have arrived at the heart of our disagreement.

You are proud of having done "hard work of learning and working in a formal
field", and are accustomed to the writing and publishing style of that field,
and that has led you to strongly resent the implication that something which
does not follow that style could possibly be better in any way or for any
purpose than something that does.

And I believe that you are wrong, that the academic writing style is not the
most appropriate for every situation or every audience, and that there is
value in addressing different situations and audiences.

> Programmers is not a term I like. It implies that you trust someone to write
> a program.

It really only implies that someone _is_ writing a program.

> And the way you use the term, you do not need to be an engineer or a
> computer scientist.

That is a simple fact. And another fact is that even most people who've had
formal education in engineering or computer science typically lose fluency
with the formal academic style and notation once they leave academia, because
it is very rarely needed in their daily work.

> For another example in the same genre:
> [https://poignant.guide/](https://poignant.guide/)
>
> This kind of narcissistic behavior and writing is amateurish and insulting
> to those of us who actually do the fairly hard work of learning and working
> in a formal field.

And here you are engaging in nothing but arrogant, elitist gatekeeping. You
don't have to _like_ that writing style (I don't either), but taking it as a
personal insult is silly, and the implication that there is only one
appropriate style for any writing that concerns programming is needlessly
close-minded.

> Just about every security vulnerability and every broken memory hog of a
> java program is due people acting and thinking like programmers, not
> engineers. Computer Scientists are not the cause of rampant security
> vulnerabilities.

I beg to differ. Being well versed in computer security and having formal
computer science or engineering education are not all that strongly
correlated.
I would in fact bet money that more than 50% of the security vulnerabilities
on [https://cve.mitre.org/](https://cve.mitre.org/) were originally introduced
by someone with formal computer science education.

In fact, computer scientists with no industry experience are often _godawful_
at programming, including security and performance aspects, because they're
used primarily to writing throwaway proof of concept code that doesn't need to
be maintained.

~~~
fargle
So please point to all the security holes created by Lamport or Knuth.

I didn't say, by the way, credentialed engineer. I said "think like an
engineer". I'd even accept Hacker.

Those who are just "programmers" are essentially data-entry clerks. And EVEN
THEY can read Goldberg before doing floating point, so at least they can get a
hint of the difficulties.

"Java Programmers" writing blogs and websites for "Ruby Programmers" to give
hints to "C# programmers" so they don't have to cut-paste from Stack Overflow
is harmful. It's noise and additional random bad information that a good
programmer, who should be engineering a solution, will have to cull out.

~~~
brazzy
> So please point to all the security holes created by Lamport or Knuth.

If the logical fallacy in this request is not glaringly obvious to you then
you have less understanding of computer science than most "data-entry clerks".

