

C Ints are Finite Numbers - anon1385
http://blogs.freebsdish.org/theraven/2013/08/17/c-ints-are-finite-numbers/

======
simias
As someone who writes C for a living, this _Generic macro scares me a little.
It looks like C++ templates using C macros, and like everything with the C
preprocessor it's a hack on top of a hack.

We all know the C macro language has a ton of shortcomings. But adding all
those weird, un-C-like behaviours just makes code harder to understand IMO.
You don't expect templates in C code. I don't have anything against C++, but
what's the point of C if it just becomes "like C++ but with macro hacks".

If the people responsible for the C standard feel the macro language needs
fixing, they have all my support. But let's actually fix the language or
design a new one, not add quick hacks that change the semantics of the C code
for, IMO, small gains.

~~~
Someone
<pedantic>_Generic is not a macro; it is just used a lot inside macros. You
should see it as a switch on the type of its argument.

These also aren't templates; you still have to write all those variants of a
function and give them unique names.

More importantly, I think this is worth the hassle. Before this, you couldn't
really do

    typedef double number_t;

and then easily call functions in the standard library such as sin, acos, or
pow without having to worry about the actual type of that typedef. _Generic
allows library developers to hide the ugliness of having different names for
similar functions. That might have been possible using preprocessor
metaprogramming, but it certainly wasn't simple.

~~~
simias
A fair point, but I think your example demonstrates what I'm worried about:
for each legitimate use of this construct there will be a billion bogus/leaky
implementations that we'll have to debug in a couple of years.

If I want to write "sin(a);" and let the compiler figure out which
implementation of sin to use, I can already do that today. I use C++.

It's a bit late to turn C into a strong-ish typed language...

~~~
Someone
If you accept _"If I want to do X, I can already do that today. I use C++"_
as an argument, I don't think there is much to be done improving C. You may be
able to add a header file here and there before C++ gets it, and you may be
able to fix some dark corners of the language, but the big picture would be
one of stagnation.

Stagnation in programming languages leads to death, so you would accept that C
will eventually die, with C++ being its replacement/killer. That is not
necessarily a bad thing, and one could argue that it already is happening (are
there still self-hosting C compilers?), but I think it is something one should
realize the moment one uses that argument.

~~~
Aldo_MX
My understanding is that programming languages exist because they attempt to
solve an existing problem under a defined paradigm.

I'm not against borrowing features from other languages when they can be
ported without affecting the paradigm, but when they do affect it... does that
mean the language will die without a specific set of features?

I believe it's the project manager's responsibility to evaluate whether the
problems a project will solve can be solved with a specific language before
taking the decision to use it.

A language will never be an "all-in-one solution", and forcing a language to
solve a problem it was not designed to solve will lead to extra work in the
best scenario, e.g. string manipulation in C.

My perception of C is that it was the best option available when it emerged,
but newer languages have since appeared that solve specific problems better
than C did.

For me this doesn't mean that C is a "dying language", but the complete
opposite: C has become a specialized language suitable for use cases where
higher-level languages mean "expensive overhead" but lower-level languages
mean "expensive complexity".

------
Strilanc
Odd choice of title, given that the post is about 'isinf(someInt)' being a
compile error (as opposed to treating ints as finite and returning false).

~~~
IanCal
Well I suppose it probably should be. `isinf(someInt)` makes no sense to call
and is most likely a logical error. Moreover, it's not defined, so you could
have (pseudocode)

    isinf(x)
      if (isint(x))
        return random() < 0.5
      else
        ...

I think the title makes sense, as it's about C ints being finite, and
therefore nobody should be calling "isinf" with them. It was interesting that
several codebases were.

~~~
gweinberg
Isn't this really an error in the spec though? Based on what the words mean,
isinf and isnan logically should always return false when called with an int
argument.

~~~
IanCal
Only in the same way that it's an error that the spec doesn't define isinf for
strings.

------
teddyh
I thought this would be about the dangers of not checking for integer
overflow.

I struggled to come up with a method which, in C, safely converts a string to
an integer type. I finally did it¹, but I've yet to see someone else use this
method.

1)
[http://stackoverflow.com/a/542833/54435](http://stackoverflow.com/a/542833/54435)

------
solinent
I can imagine this happening in template code:

    template <typename T>
    bool isNumberInfinity(T number)
    {
        return isinf(number);
    }

    template <>
    bool isNumberInfinity(Number number)
    {
        return number == Number::INF;
    }

So instead of specializing for int, we rely on isinf working for ints, rather
than doing the "correct" thing:

    template <>
    bool isNumberInfinity(int number)
    {
        return false;
    }

This is a very contrived example since I'm using template functions, which
are practically useless, but it holds for any data structure which holds a
type that is meant to behave somewhat like a "number".

~~~
pbsd
What do you mean by "template functions are practically useless"?

------
com2kid
Can anyone here provide an explanation for why some code that is supposed to
be high quality and trusted is calling isnan(int)?

That seems to be the more pressing issue here, not the (long since done)
inclusion of a new feature in C!

~~~
rcfox
Usually, whenever there's something weird being done in C, it's because of
macros or other forms of code generation.

However, in the case of floating point vs. integers, it's also likely that a
literal is missing a decimal point:

    1/4 != 1.0/4

