Linus: bool is dangerous in C if you don't understand it (lkml.org)



Here's the advantage of C99's bool. Lots of C projects do something like this:

    typedef char bool;
and then maybe they use it like so:

    bool found_it = strstr(haystack, needle);
This has a serious bug that will manifest only about 0.4% of the time. The problem is that strstr returns a pointer, and converting a pointer to a smaller integer type throws away the high bytes. If the returned pointer is not NULL but happens to have a zero low byte, this bool will be false even though the string was found.

Even if that doesn't happen, you'll often see code like this:

    #define true 1
    ...
    if (found_it == true) ...
which is wrong as well.

bool addresses these. Conversions to bool always result in values of 0 or 1, so both of the above problems are avoided.
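
For illustration, here is a minimal sketch of that rule (the values are arbitrary); with the typedef char bool above, the 0x200 assignment would silently truncate to 0 instead:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *haystack = "needle in a haystack";
        bool found_it = strstr(haystack, "needle");  /* pointer converts to exactly 0 or 1 */
        bool from_int = 0x200;                       /* low byte is zero, but value is nonzero -> 1 */
        printf("%d %d\n", found_it, from_int);       /* prints: 1 1 */
        return 0;
    }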

That said, I agree with Linus. It's not well understood, and using bool in a library header may conflict with another definition of bool in the project. Also, compilers typically warn about at least the first error above. As a C99 feature, bool is too little, too late; had it been part of C89, things might be different.


"warning: initialization makes integer from pointer without a cast"

I mean sure, there are warnings and there are warnings. Absurdist pedantry about warning states isn't something I tend to worry about.

But if you're dealing with a code base that is building to completion without attention to this warning (which pops out with the default flags on gcc 4.7.2 -- no -Wall needed!), then you have more serious problems than can be fixed by a builtin pointer-to-bool conversion.


Arguably that warning just suggests that you put in a cast, which makes it no more correct. Then you get another warning "cast from pointer to integer of different size [-Wpointer-to-int-cast]" which apparently is also on by default, but I'm not sure it's immediately obvious to a casual C programmer how that will go wrong.


Again, I'd argue that if you have a programmer willing to throw in a double cast expression (!) to work around a warning without once thinking about what they are trying to do (in this case to check if a pointer is NULL) then you have problems that aren't going to be well addressed by a C99 bool anyway.

I mean, this example is a little contrived as it is. The "normal" way to treat a pointer as a boolean and branch off of it is just "if(ptr)", and that's worked without trouble on all compilers for 40 years now.

Is C sort of a mess? Yeah. But this isn't a particularly illustrative example IMHO. All languages have this nonsense (c.f. the Ruby/JS "Wat" video).


Casting pointers to integers isn't all that common to do. When I see that warning, I take it very seriously, because it almost always means I made a real mistake.


No it doesn't. Not to anyone who knows C anyway.


It's gotten better the last few years, but I still see that exact warning all the time in open-source projects. Nobody cares about warnings until someone mandates -Werror.


CFLAGS+=-Wall -Wextra -pedantic -Werror # have fun


This pointer-to-bool bug isn't theoretical, either—it happened to me just a few months ago. The Cocoa function NSAssert works on objects, and raises an exception on a nil pointer. A similar assertion in a library I was using was simply casting to BOOL (a signed char), causing the error you described. It was a nightmare to debug, since it was interacting with a black box that I believed was the source of the problem.


Another problem with that typedef and usage is that it assumes that NULL as an integer is 0. It need not be. The literal constant 0 cast to or assigned to a pointer type always works because the compiler treats that as a special case and replaces the 0 with whatever the actual NULL bit pattern is for that pointer type. E.g., "char * p = 0" might actually generate something like "move #0xFFFFFFFF, p" on a machine where 0xFFFFFFFF is the bit pattern for NULL char pointers.

A lot of people forget this, and think that NULL pointers must actually have a bit pattern that is all zeros.


Name me a common system where NULL != 0

You might as well be worrying about code compatibility with systems that have a 7 bit char, or some other irrelevance.


C++ member pointers are frequently represented as offsets into the class on vanilla x86. Given that 0 is an entirely valid offset -- the first member given a class without initial padding or vtable -- NULL must of course be represented by an entirely different, nonzero, bitpattern.


hmm? you're adding an offset to a pointer (the base of the object), which is still not going to equal 0 even if the offset is 0.


The C++ standard calls pointers to members... well, pointers to members. I'm not offsetting anything: The compiler is on my behalf. It could create function stubs for all I care, similar to how one might emulate this feature in C#.

  #include <iostream>

  struct Foo
  {
  	int bar;
  };

  int main()
  {
  	int (Foo::*null_pointer) = nullptr;
  	int (Foo::*first_member_pointer) = &Foo::bar;
  
  	std::cout << *(std::ptrdiff_t*)&null_pointer << "\n";
  	std::cout << *(std::ptrdiff_t*)&first_member_pointer << "\n";
  }
Running this code after compiling it with VS2012 results in, on my machine:

  -1
  0


That's simply not true. There is the rule that NULL must be equal to (void *)0. I've had several discussions about that topic, but it's part of at least C99 and C11.


Yes, (void *)0 gives a null pointer. I mentioned that. However, that doesn't mean all the bits of a null pointer are required to be all 0. The actual machine representation of a null pointer is implementation-defined. The only requirement C99 places on the actual value of a null pointer is that it compares unequal to a pointer to any object or function, and any two null pointers compare equal. See section 6.3.2.3 paragraphs 3 and 4:

   An integer constant expression with the value 0, or
   such an expression cast to type void *, is called a
   null pointer constant. If a null pointer constant
   is converted to a pointer type, the resulting pointer,
   called a null pointer, is guaranteed to compare unequal
   to a pointer to any object or function.

   Conversion of a null pointer to another pointer type
   yields a null pointer of that type. Any two null pointers
   shall compare equal.
If you do this:

   int pv = 0;
   char * p = (char *)pv;
you are not guaranteed to have p set to a null pointer, because the zero you are casting and assigning is not an integer constant zero. The behavior is implementation-defined, as is going the other way. Section 6.3.2.3 paragraphs 5 and 6:

   An integer may be converted to any pointer type. Except
   as previously specified, the result is implementation-defined,
   might not be correctly aligned, might not point to an entity
   of the referenced type, and might be a trap representation.

   Any pointer type may be converted to an integer type. Except
   as previously specified, the result is implementation-defined.
   If the result cannot be represented in the integer type, the
   behavior is undefined. The result need not be in the range of
   values of any integer type.


This should be okay, though:

   char *p1 = 0;
   intptr_t pv = (intptr_t)p1;
   char *p2 = (char *)pv;
I.e. NULL must round-trip through intptr_t.


As a really bad C programmer I have a question, if you don't mind my asking:

Is that the kind of knowledge/insight you gain from studying C, or only when you are bitten by a bug?


You get that kind of insight by reading the specification. If you are anything like me, you'll only actually read it when you are bitten by a bug.

But that's not universal, there are people that like to know the ins and outs of the language before using it... and yes, they are the ones doing the right thing.


>> As a C99 feature, bool is too little, too late; had it been part of C89, things might be different.

I'm failing to understand this line. Can you be more explicit why it's bad now, and would have been good earlier?


Because lots of compilers in common use, particularly from Microsoft, don't support C99 so to work around their assheadedness you have to put in compatibility workarounds. Those workarounds behave differently from compiler-implemented bool, and the risk of errors from those inconsistencies is significantly greater than the benefit of using bool in code.


"Lots of" compilers in common use? Can you name one besides Microsoft's?


Linus is usually dead-on in language matters.

bool as a type in C strikes me as dubious. In C, you need to care about the underlying representation of things. I'm not even sure it was a good idea to call a char a char, because it's actually a small integer. Calling it a char is just misleading; its signedness is implementation-defined, and a character literal is not even a char (it's an int). It's a source of bugs. "bool" sounds potentially worse.


That needed to be clarified and was missing from Linus' reply. It is a very important point from the language perspective.


Probably better:

    bool found_it = !!strstr(haystack, needle);


In the example you gave, any compiler from the last 10 years would have given a warning message when converting the pointer to an int. Ignoring warning messages is extremely, extremely bad practice. If you are still doing that, I suggest auditing your code right now to see if there are more problems such as mismatched printf arguments and so forth.

bool is a bad thing in C and C++, because of the implicit conversion rules. In C++, pointers implicitly convert to bool in every context-- there is never any warning message. This effectively reduces the compiler's ability to typecheck your program, since so many variables are pointers already, and C++ will happily stuff them into any bool argument to a function.

It seems like the C99 _Bool type implements the same implicit type conversion brain damage. That makes it a step backwards in terms of type safety, not forwards. That's right, using the old-fashioned, fuddy-duddy, plain old int and 0 and 1 gives you better type checking than the shiny new C99 feature.
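
To illustrate the type-checking point with a minimal C sketch (the function names are made up for the example): gcc diagnoses the int case but accepts the _Bool case silently.

    #include <stdbool.h>

    void takes_int(int flag)   { (void)flag; }
    void takes_bool(bool flag) { (void)flag; }

    int main(void) {
        const char *p = "hello";
        takes_int(p);   /* diagnosed: passing argument makes integer from pointer without a cast */
        takes_bool(p);  /* silently accepted: pointer-to-_Bool conversion is defined, flag becomes 1 */
        return 0;
    }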

As Linus points out, the worse typechecking comes with a side order of compatibility problems. And it is not any more efficient or readable.


It seems like the C99 _Bool type implements the same implicit type conversion brain damage.

It does: "When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1." (from §6.3.1.2 of the N1256 draft)

Compilers will happily compare/set pointers to 0 since that's the null pointer constant (§6.5.9, §6.3.2.3), so there's no warning:

    % cat foo.c
    #include <stdio.h>
    int main() { _Bool b = "blah"; printf("%d\n", b); return 0; }
    
    % gcc -std=c99 -W -Wall -Wextra -o foo foo.c
    
    % ./foo
    1


Have you ever actually encountered a bug involving accidentally passing a pointer to a 'bool' function argument? I've seen many obscure kinds of bugs, but don't remember ever in my life seeing one of those- so I suspect this is a red herring.

(And of course if the value passed happens to be an int rather than a pointer, typedefing bool to int won't save you!)

edit: And as mentioned in the rest of the thread, while it's unjustifiably confusing to, say, pass 'p' as a boolean parameter rather than 'p != NULL', it's not unreasonable to do something like

    typedef int bool;
    bool foo_enabled;
    void set_foo_enabled(bool enabled) {
        if(enabled == foo_enabled) return;
        /* new value is different, do some work */
    }
    set_foo_enabled(1);
    ...
    set_foo_enabled(flags & ENABLE_FOO);
Though I haven't come across this kind of bug in practice either.


I've encountered a bug like that when the callee was converted from taking a pointer to taking a bool and the caller was not updated because it compiled fine...


Even after reading all the comments I am not sure whether I should use bool in C code or not. I am a C++ programmer only starting to have some fun in C.

Linus says: If "bool" had real advantages (like having a dense array representation, for example).

But doesn't bool have the advantage of reducing the perceived complexity of the code and making it more understandable? If the function returns int, one might assume it is some number, and would have to consult the documentation or look at code samples to find out that the int is only ever compared to 1 or 0. A bool return type instantly tells you there's some kind of check or flag being returned, and makes it undoubtedly easier to tell what's going on.

As for casting rules other mentioned, doesn't the C compiler warn about anything converting to anything that can store less information than what it converts from?


Normally you get a warning when the destination type is smaller, but with C99 bool you don't because that's how the standard is written: any non-zero value of any other type is implicitly converted to "true".

That's why C99 bool typecasting problems are so subtle. If you were previously using typedef char BOOL and you switch to the "real" bool, you don't get compiler warnings about suspicious casting anymore.
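
A minimal sketch of that difference, with OLD_BOOL standing in for a hypothetical legacy typedef (older gcc versions warn on the first assignment; newer ones may reject it outright):

    #include <stdio.h>

    typedef char OLD_BOOL;

    int main(void) {
        const char *p = "x";
        OLD_BOOL a = p;  /* diagnosed: assignment makes integer from pointer without a cast */
        _Bool    b = p;  /* accepted silently: conversion to _Bool is well defined, result is 1 */
        printf("%d %d\n", a, b);
        return 0;
    }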


but you don't have the bug either, right? because it doesn't take only the first byte. so what's the problem? (genuine question - have never used _Bool and am trying to get a handle on this thread).


Indeed, the C99 bool fixes this particular bug, but it can introduce new bugs because its semantics are different from other C types. C99 bool allows you to assign a larger integer or a pointer to a bool with no warning. As illustrated by the grandparent commenter, the general assumption is that the C compiler will always warn you about these situations.

If bool had always been part of the standard, people would be aware of this issue. But it wasn't, and so the C world is full of BOOL typedefs that can be chars, ints or whatever, and it's easy for programmers to implicitly assume that the "real" bool behaves like the typedefs they're used to.


>But doesn't bool have the advantage of reducing perceived complexity of the code and making the code more understandable?

You cannot reduce "perceived complexity" by adding subtle error cases, i.e. actual complexity.


My personal advice:

If you're starting a new codebase and don't have to care for legacy compilers, use it. Otherwise, it's a case by case decision.

I agree that you should never typedef a type different than _Bool to bool if you want to avoid a world of hurt, and I'd argue that using true and false in a C codebase is harmful as well.


Anyone who writes C understands the int -> Boolean expression just fine. There's nothing more clear about bool. Really, it's not a tough rule to grok.


Really though, what isn't dangerous in C if you don't understand it? (Hell, most of C is dangerous even if you understand it.)


I think the point he is trying to make is that most people think they understand it when they really don't.


The biggest upside is better readability of code, esp. when you're glancing over documentation of functions.


> Linus: bool is dangerous in C if you don't understand it

What in C is NOT dangerous if you don't understand it?


What in <any language> is not dangerous if you don't understand it?


Strong typing?


...fair enough.


Never understood why we needed a boolean type, myself. What is wrong with 1 and 0?


Because in a statically typed language, you want the type of a symbol to represent the set of values it can be bound to. We often have the set {something that passes a truth test, something that fails a truth test} in our programs, so a boolean type is a nice way to represent it.

Now, this is not the same thing as defining what exactly the truth test is. It's perfectly reasonable, and arguably good design, to have both a narrow boolean type and a wide definition of truthiness.


What's wrong with "anything that's not false is true"?

If you treat it this way you remove the vast majority of the problems with most C or C++ BOOL implementations.

The low bytes of a pointer being 0 (It can happen on Windows with VirtualAlloc and company) can screw up the works, so avoid assigning a pointer to a BOOL and use BOOL b = !!pointer if you have to break that rule.


Why not make the correct comparison and do BOOL b = pointer != NULL? Now the code says what you mean, is easy to understand, and assigns a boolean value to a bool.


!!p does the same as p!=NULL. The choice is purely stylistic. (!p is equivalent to p==NULL; !!p is therefore equivalent to !(p==NULL). Or, alternatively, p!=NULL.)


Because the properties of other types might not translate. It's probably okay with integer types since (a + b) => (a OR b) and (a * b) => (a AND b) as long as you stay away from upper bounds.

But some semantic maps are less great. Having your compiler help you to ensure you're not implicitly mixing up bad maps can be helpful.


For succinctness' sake: you only need "0" a.k.a. FALSE. All else is TRUE.


C++ tried to use template specialization with std::vector<bool> to take advantage of bool-ness and provide a dense storage for bits, but it suffers for reasons Linus points out here, namely the inability to cleanly address individual elements of the vector (so it doesn't behave exactly like a normal std::vector).


Right. At the time, as I understand it, the standards committee thought reference semantics and operator overloading would be powerful enough to create perfect "proxy" objects that would be indistinguishable from their direct equivalents. std::vector<bool> ended up demonstrating the opposite.

std::bitset<N> is quite nice though, as long as you have a fixed size.


Don't think anyone has mentioned this:

bool is also subject to integer promotion, so when you pass a bool to a function or do some integer arithmetic it can become an int.

From K&R, The C Programming Language, 2nd Ed., p. 174:

A.6.1 Integral Promotion

A character, a short integer, or an integer bit-field, all either signed or not, or an object of enumeration type, may be used in an expression wherever an integer may be used. If an int can represent all the values of the original type, then the value is converted to int; otherwise the value is converted to unsigned int. This process is called integral promotion.
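
A small sketch of that promotion in action (the output assumes a typical platform with a 1-byte _Bool and a 4-byte int):

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        bool b = true;
        printf("%zu\n", sizeof b);        /* sizeof(_Bool), typically 1 */
        printf("%zu\n", sizeof (b + 0));  /* b is promoted to int, so this prints sizeof(int), e.g. 4 */
        return 0;
    }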


Using bool in C, as well as in C++, has distinct advantages. If Linus or some idiot cannot control himself from making mistakes, and from causing serious damage to the code, it is his fault. You can always aim any construction tool at your own foot. I don't care if fools can't control it, but I wouldn't say no to bool.


Wow, I'm sure there's plenty of things to vehemently disagree with Torvalds on and even to criticize him for - manners, politics, management style and even the finer points of programming. But I would have thought he'd earned his stripes enough to at least not be called an idiot on his C skills on HN. But apparently not. Next up, Dennis Ritchie!


Just as inexperienced as this Kernighan guy.

On the other hand, I'm happy if my C code compiles, so I shouldn't throw any stones here.


Sounds like someone with limited experience with OPC... (Other Peoples Code)

Ever worked on a big code base where some programmers were in Vietnam, some in Russia, some in China and most here in the US? Conventions are different and pride and communications issues make it hard to correct behavior.

Ever worked as the commit point for an offshore group? Code review?

Didn't think so...


Just for reference, how many decades of C experience are behind your comment? Nothing personal here, it is just that one really needs that information to put your reply into context.


Why?


Well... I wanted to give the OP a chance to put his/her comment into context. It is just that sometimes, on HN, you really don't know what kind of experience is standing behind a comment.


This is the only forum I've ever been on where I can be pleasantly surprised by how much experience actually is behind a statement. It's a nice change for once.


If you have the option to choose between a hammer that will hit your own finger and one that will stop short when it detects that (with no false positives), all other considerations being the same, and you willingly choose the first hammer, you are a fool.

Basically the true idiot is not one that makes mistakes, but one that assumes only idiots make mistakes.


but, from what i can tell, _Bool actually prevents mistakes. it works consistently and logically like a boolean type.

the mistakes come from pre-C99 code that calls things boolean, but has unexpected dependencies (that _Bool removes) on bit-level structure.

the more i learn from this thread, the more i want to use _Bool. which worries me, because Linus isn't dumb. but i think(?) the point he is making is related more to legacy code that is "broken but works". or maybe to programmers that rely on/use/exploit bit-structure/implementation details in "bool" types because they don't have experience with more strictly typed languages. [edit: or as mbell said in a reply to me elsewhere, because at os level you often do need to care about bits]


I've certainly found C++ bool helpful, in terms of readability and self-documentation, and the general positive clarifying effects of making sure you've got your types straight (to the extent that C++ gives you any help with that). C++ bool does the right thing, as far as turning values into true or false goes (the cast effectively does a "!=0", making the result exactly 1 or 0). I believe that C99 _Bool behaves in the same way in this respect - so that would be doing the right thing too.

(One issue with "native" bools, that others have pointed out, is that it's possible to introduce bugs due to implicit conversions. But I've personally found this behaviour usually to be what you want, and I don't remember having to fix any bugs caused by it, suggesting that they can't have been all that difficult to sort out.)

His point about casting to fake bool types is well made, and I wonder how many instances of this I will now spot? (gcc appears to issue perfectly fine warnings for this case, though.) Another argument for using the built-in types, in my view. Or for switching to "typedef uintptr_t bool".


Nothing in the C family simply prevents mistakes. It's always more complex than that.

The bool type prevents a lot of kinds of mistakes, but opens your code up to a lot of new ones. The problem is that if you have a lot of experience in not using it, you already learned how to avoid those first mistakes, but all that experience is useless for dealing with the newly introduced ones.

My opinion is that bool is a good construct. It's a good idea to use it in new code, but first you must understand it, otherwise you won't have a fighting chance.


Interesting that you fail to name any of these advantages.


"variadic macros and bool are just adornment" in The future according to Dennis Ritchie (a 2000 interview) [1]

[1] http://www.itworld.com/lw-12-ritchie?page=0,1


I recall a recent post that looks at other definitions of the boolean ... http://nshipster.com/bool/

We need to develop one universal standard... :)


I read the article and in fact the versions of stdbool look pretty close to what he said.

Maybe there is some subtlety there I am missing, but I do not see it.


The subtlety is that bool expands to _Bool, not int. So bool is actually a macro, not "typedef int bool" as in Linus' example code snippet.


The important bit seems to be the behavior of the _Bool type, not the macroness (also, the [Future Directions section of the Open Group spec](http://pubs.opengroup.org/onlinepubs/007904875/basedefs/stdb...) says #undefing and redefining the macros is obsolescent, leaving open the possibility of changing the implementation from a macro to something else entirely).


Linus' example code was showing a typical compatibility hack that can easily cause unintended consequences and hard-to-debug problems.


I think it's fair to say just about anything is dangerous in C if you don't understand it...


This documentation for stdbool.h says exactly what he says is wrong:

http://pubs.opengroup.org/onlinepubs/007904875/basedefs/stdb...

It's interesting though... does anyone know of specific cases of the problems he's referring to?


Consider a simple bit mask operation; assume 8-bit ints for the sake of brevity.

Prior to C99, assuming you use the mentioned typedef for bool:

    bool a = someInt & 0x02
'a' will be 0x02 if the bit is set.

In C99, bool is aliased to _Bool and if the flag is set, the above code will result in 'a' being 0x01 because of C99's requirements for type conversion.

To accurately get the same behavior prior to C99, you can add !!, e.g.:

    bool a = !!(someInt & 0x02)  //'a' is now 0x01 when the bit is set.
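
For reference, here is a runnable version of that contrast, with old_bool standing in for the pre-C99 typedef:

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned char old_bool;  /* stand-in for the pre-C99 typedef */

    int main(void) {
        int someInt = 0x02;
        old_bool a_old = someInt & 0x02;            /* keeps the raw bit: 0x02 */
        bool     a_new = someInt & 0x02;            /* C99 conversion: exactly 1 */
        old_bool a_fix = !!(someInt & 0x02);        /* pre-C99 idiom that also yields exactly 1 */
        printf("%d %d %d\n", a_old, a_new, a_fix);  /* prints: 2 1 1 */
        return 0;
    }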


Most, if not all, of the problems people are describing in this discussion come down to trying to assign something that isn't a boolean value to a variable of boolean type, and expecting it to do something sensible. I don't see how this is ever going to have a happy ending.

If you had written

    bool a = (someInt & 0x02) == 0x02
or something similarly clear and unambiguous, nothing odd would happen, even in C.

(Edit: OK, that's not strictly true, because of the operator precedence order. It's never made sense to me that integer arithmetic operators have higher precedence than comparisons but bitwise logical operators have lower precedence, so if you remove the parentheses above then the resulting code doesn't do what you'd expect. I suppose this is because I'm looking at the problem as if comparison operators return a proper boolean value rather than an integer, and the ordering we've wound up with in C dates from a historical oddity about 40 years ago.)
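
For the curious, a minimal sketch of that precedence trap, with the test value chosen so the two forms disagree:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        int someInt = 0x02;                     /* only bit 1 is set */
        bool wrong = someInt & 0x02 == 0x02;    /* parsed as someInt & (0x02 == 0x02), i.e. 0x02 & 1 -> false */
        bool right = (someInt & 0x02) == 0x02;  /* the intended test -> true */
        printf("%d %d\n", wrong, right);        /* prints: 0 1 */
        return 0;
    }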

The underlying problem with booleans in C99, as Linus and others have been saying, is that the language doesn't actually enforce basic type safety, so cases like your first example

    bool a = someInt & 0x02
that should result in a type error are allowed through, and with odd results: how does it make any sense for a boolean variable to have an integer value like 0x01 or 0x02?

Then programmers who relied on such odd results wind up writing horrific code like your second example

    bool a = !!(someInt & 0x02)
where fudge factors build on top of distortions to make the old hacks work.

And then we wonder why in 2013 we still have widely used, essential software that is riddled with security flaws and crash bugs. :-(


The reason that & and | have lower precedence than the comparison operators is indeed historical - it's because the earliest versions of the language didn't have the logical boolean operators && and ||, so the bitwise & and | operators stood in for them. The lower precedence meant that you could write:

  if (x == 1 & y == 2) {
..and have it do what you meant. This became a bit of a wart when the && and || operators were introduced (still well before ANSI standardisation), but it was considered that changing it would have broken too much existing code.


    bool a = someInt & 0x02;
is perfectly fine in C99. 0 converts to false, non-zero converts to 1 when assigning to a _Bool.

What's not fine is people creating their own compatibility booleans where they define true as 1, as that would indeed break (rather odd...) code such as

    bool a = someInt & 0x02;
    if (a == true) 
If the bool above is not the C99 _Bool, but just a typedef to another integer type, you end up with if(0x02 == 1) evaluating to false.


> Then programmers who relied on such odd results wind up writing horrific code like your second example

What's horrific about that? Would it be better if we had:

    #define to_bool(_X) !!(_X)
    ...
    bool a = to_bool(someInt & 0x02);


but this - and other examples here - seem unfair, in that they are blaming _Bool for working correctly (consistently, logically) in cases where people were previously doing dumb things.

if you rely on something called "bool" being 0x02 you're going to have a bad time. that's hardly C99's fault.

your last line of code is what i would write, effectively, if i needed to compare booleans. it seems to me that _Bool is an improvement because, pre-C99, if i forgot the !! dance somewhere, i likely had a bug. with _Bool things just work.

(disclaimer, as with other reply here - still trying to get a grasp on this, so may be saying something stupid myself).


I don't really disagree, but Linus's point, which I agree with, is that C99's implementation is 'better' but is still pretty bad, i.e. 'bool' is still a mask for 'int' and as a result arrays of bools aren't what they should be (bit arrays), directly serializing a bool still sends out an entire int of data, etc. The 'improvement' in C99 isn't worth the broken code that will result from the subtle differences.

It's also worth considering in context that a lot of the code which will run into problems with these small differences is low-level OS/driver code that often deals with a lot of bit flags and bit manipulation in general. When you're trying to fit a web server into 3800 _bytes_ of RAM on an 8-bit microcontroller, 'doing dumb things' becomes 'being inventive'.


C99 6.3.1.2 requires that converting nonzero to _Bool yields a 1. An int doesn't work that way.


    bool a = value & (1 << 5)
a will be 1 or 0, not 1 << 5. You don't get this behavior with a normal int. MSVC also has a warning about some of this behavior [1], with a nonsense performance subtext. I don't think there's a GCC equivalent.

1: http://msdn.microsoft.com/en-us/library/b6801kcy.aspx


Seems like the only reason you'd expect it to be 1 << 5 is that you've been working with a broken definition of bool using #define.

In any sane language you can't redefine bool that way, nobody would ever expect bool to take more than two values, and there wouldn't be a problem.


But this is the case in every other language I can think of. I.e., bools have one of two possible states; as a human, 1<<5 is neither true nor false.


You missed the part where _Bool is not the same as int.


Note that he also denounces the implicit conversion rules.


Nobody ever said bool was a bit.


Well... Everything is dangerous in any language if you don't understand it.


That may be true. However something is more dangerous if it is very easy (or common) to mis-understand it. I think that is the point that Linus was trying to get at -- bool is dangerous because it is likely to be misunderstood.


X is dangerous in Y if you don't understand it. Indeed.


Sandwiches are dangerous in astronomy if you don't understand it?


Of course. Imagine, during the assembly process, someone eats his sandwich while hanging over the mirror of some space telescope. Do you think all those dots you're seeing on the NASA images are stars? And once they discover the peanut butter nebula, they have to update parts of the astronomy books.


For all y in Y, where Y is the set of all computer programming languages, y is dangerous if you do not understand y.


I wonder if Linus has ever tried better languages.


Ruby on Rails and your single-origin latte are calling. C has been the most performant option for years now, and a large part of the kernel is in assembly. Remember to download your gems tonight, rude boy.


Are you proposing the kernel be rewritten in a "better" language?


I have no idea how you derived that from that comment.


He derived it from you saying "better".

For what he does there is NO better language than C.

So what would be a "better language"? Haskell? In what way would it be better -- since it wouldn't be better for the tasks he wants?

Abstract better?

Sorry, I don't believe in that.


You forget you're on HN, where slow-ass designer languages are all the rage. Sigh..


Is C the best possible choice for Git? For Subsurface? For the embroidery template converter thingie he wrote for his wife?

Linus used C for all of those, but was it because C was technically the best choice in each case, or because it was good enough to get the job done and because it's the language he's most comfortable with?


Linus is a bit obsessed with making the software he uses fast. I don't think it'd be as easy to make git as fast as it is if it was written in something other than C.

Subsurface though probably could be written in a different language and not feel any slower.


>Is C the best possible choice for Git? For Subsurface? For the embroidery template converter thingie he wrote for his wife?

Sure -- since he also wrote those himself, so a good requisite would be "language Linus can quickly write shit in".

And for git there are other reasons too: portable, fast, lots of people can hack in C, etc.


> "Is C the best possible choice for Git?"

Well, I mean...

  SLOC  Directory       SLOC-by-Language (Sorted)                              
  100691  top_dir         ansic=80140,perl=10458,sh=7523,python=2570           
  98482   t               sh=97926,perl=546,ansic=10                           
  38293   builtin         ansic=38293                                          
  22256   contrib         sh=9888,perl=5838,python=3130,lisp=1786,ansic=1449,  
                          php=120,csh=45                                       
  18056   compat          ansic=18004,perl=52                                  
  13754   git-gui         tcl=10299,sh=3455                                    
  10859   gitk-git        tcl=10745,sh=114                                     
  6225    gitweb          perl=6225                                            
  5400    perl            perl=5400                                            
  2350    xdiff           ansic=2350                                           
  1288    vcs-svn         ansic=1288                                           
  689     git_remote_helpers python=689                                        
  292     Documentation   perl=155,sh=137                                      
  266     templates       sh=266                                               
  203     block-sha1      ansic=203                                            
  173     ppc             asm=98,ansic=75                                      
  0       mergetools      (none)                                               
  0       po              (none)                                               
                                                                               
                                                                               
  Totals grouped by language (dominant language first):                        
  ansic:       141812 (44.42%)                                                 
  sh:          119309 (37.37%)                                                 
  perl:         28674 (8.98%)                                                  
  tcl:          21044 (6.59%)                                                  
  python:        6389 (2.00%)                                                  
  lisp:          1786 (0.56%)                                                  
  php:            120 (0.04%)                                                  
  asm:             98 (0.03%)                                                  
  csh:             45 (0.01%)
Git's design is such that different parts of it can be written in different languages with no hassle. The majority is in C, but typically new features are prototyped out in other languages first until it becomes clear that speed will be important (and it generally will, since a major usage pattern of git is scripts and other commands calling your command many times in a row. Does git-add need to be fast if you're just doing 'git add ...'? Perhaps not. Does it need to be fast if my fancy-smancy script is wailing on it several thousand times? Yeah; particularly it needs to have a fast startup time. Does git-difftool need to be in C? Probably not, which is probably why it is still Perl instead. Test-cases? No reason in the world for them to be in C, so they're in sh.)


Windows has a large portion of its code written in C++. MacOS has a large portion in Embedded C++ and Objective-C. Like any good video game, NetBSD supports kernel scripting in Lua.

Linux implements some C++ features, like `virtual` dispatch.


Linus has stated his opinion about C++ code in the kernel. You just have to search for it; you'll find it.

The TLDR is very much like his TLDR for bool: C++ is too complex, it's hard to know everything it's doing behind the scenes, and he wants total control of the code in the kernel.


> you saying "better".

I didn't make the original comment.


Well assembly is unwieldy for writing the entire kernel, but he's written portions of it that way.


My bool is a char. I'm not going to add junk to my compiler for actual bool support. I guess I could explain that. I have a ToBool(StrStr()) function.


C: the language that can't even get a boolean type right.

And that stuff was standardized by a committee? Wow.


Why should C get boolean right? As a language, it is very close to the hardware, and the hardware doesn't actually have the concept of a bit. The smallest unit that modern processors expose is a byte, so it makes sense that the smallest datatype C has is also a byte.


Hardware certainly has the concept of a bit, otherwise it couldn't have bit operations; it doesn't have a data type that's a bit, though, but then it doesn't have structs either, or in some cases long longs or pointers.

bool helps static analyzers as well.


No need to define boolean in terms of a bit just because that's all it takes. There's no harm in being redundant and using a whole byte for it, so that -1 is true and 0 is false.


There are 2 kinds of languages, my friend. Those people complain about, and those no one uses.


Paraphrasing Bjarne Stroustrup.


>C: the language that can't even get a boolean type right.

And that powers 99% of the software that matters.

Name me one language that you cannot say similar BS about some aspect of it.


Brainfuck!



