The joy of max() (lwn.net)



The most insane macro in that is this monstrosity:

  #define __is_constant(x) \
      (sizeof(int) == sizeof(*(1 ? ((void*)((long)(x) * 0l)) : (int*)1)))
If x is a compile-time constant, (void*)((long)(x) * 0l) is equivalent to NULL, otherwise it's just a regular void*. And if both sides of a ternary are pointers, the rule is:

If one side is NULL, the type of the ternary should be the type of the other side (int* in this case).

Otherwise, if one side is a void*, the type of the ternary should be void*.

So if x is constant, the type of the ternary is int*, and if it's not constant the type is void*. The sizeof(*()) around it turns that into sizeof(int) or sizeof(void), and in the GNU dialect sizeof(void) is 1, which is different from sizeof(int). Whew.
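
For anyone who wants to poke at it, a minimal GCC-only demo (sizeof(void) is a GNU extension, so this isn't strict ISO C):

    #include <stdio.h>

    #define __is_constant(x) \
        (sizeof(int) == sizeof(*(1 ? ((void*)((long)(x) * 0l)) : (int*)1)))

    int main(void)
    {
        int n = 5;
        printf("%d\n", __is_constant(42)); /* 1: null-pointer-constant arm, type int* */
        printf("%d\n", __is_constant(n));  /* 0: void* arm, sizeof(void) == 1 in GNU C */
        return 0;
    }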

What I want to know is what's wrong with __builtin_constant_p.


> What I want to know is what's wrong with __builtin_constant_p.

1. __builtin_constant_p might not return a constant--it might return a magic value that says "I don't know yet, ask me again at a later optimization step". __builtin_choose_expr needs to be evaluated pretty early, so "ask again later" isn't good enough for it.

https://lkml.org/lkml/2018/3/18/268

2. It doesn't really care if it is constant in the __builtin_constant_p-sense, it cares if it has side-effects.

> ({x++; 0;}) is constant in the eyes of __builtin_constant_p, but not side-effect free.

https://lkml.org/lkml/2018/3/18/316
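
For illustration, a tiny GCC probe; what __builtin_constant_p answers for a given expression can change with optimization level, which is exactly the "ask me again later" problem:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        (void)argv;
        printf("%d\n", __builtin_constant_p(42));    /* 1: a literal is always constant */
        printf("%d\n", __builtin_constant_p(argc));  /* 0: a runtime value never is */
        return 0;
    }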


I'll also drop in another link, https://lkml.org/lkml/2018/3/20/845, which D Wheeler pointed out on LWN, and this quote from Mr T's reaction:

>That's not "an idea".

>That is either genius, or a seriously diseased mind.

>I can't quite tell which.


I think the above macro works without knowing the size of void or int.


It relies on sizeof(void) and sizeof(int) being different, and also on sizeof(void) being legal (it's only legal in the gcc dialect).


It would be nice to credit this explanation to Linus.


No, I wrote it myself. I figured out how it worked from reading the C standard.


No wonder I thought I had read this before.


Yea I mean isn't parent's comment just a ripoff of that entire thread?


I often find it amusing how C advocates complain about the complexity of C++, and then proceed to implement the same complex functionality in even more brittle ways. Language features I have seen C developers emulate poorly in C: constexpr (here), virtual functions (with function pointers), templates (with macros), exceptions (with setjmp/longjmp).


The only argument for C and against C++ is the slippery slope of wanting to use one useful feature and ending up using a thousand features where no one is really sure what's actually happening in the end. Then again they start re-inventing "C with classes" using unsafe structs and macros. It seems that the problem is not really with the language but with the developers who cannot restrain themselves.


> developers who cannot restrain themselves

That’s pretty much all of us IMO. We need to get stuff done, and if there’s a feature available that lets us do it more quickly, we use it. We take the path of least resistance—a warning saying “don’t” provides no actual resistance.

That’s why I like languages like Haskell that by default prevent me from doing anything I’m probably going to regret later; if I want to break referential transparency, type safety, memory safety, &c. then I can always opt in explicitly and grep for it later when it hits the fan.

That said, I do prefer C over C++—really, the only things I miss are destructors (local automatic memory management is nice) and templates (generic/type-level programming is nice), but they’re not critical for the kinds of stuff I use C for anyway.


> That’s why I like languages like Haskell that by default prevent me from doing anything I’m probably going to regret later

    head []
(Not that I disagree with what you want, just that even Haskell has made some mistakes :-P)


Nah, that’s fair. However, here’s a neat trick—look at the type of “head”:

    ∀a. [a] → a
Viewed as a logical formula, it’s not a tautology, so I know it must be partial. Proof:

    Given: L(a) = μr. 1 + (a × r)

    ∀a. L(a) → a
    = [definition of L]
    ∀a. (μr. 1 + (a × r)) → a
    = [unroll μ]
    ∀a. (1 + (a × μr. 1 + (a × r))) → a
    = [∀xyz. (x + y) → z ⇔ (x → z) × (y → z)]
    ∀a. (1 → a) × ((a × μr. 1 + (a × r)) → a)
    = [(∀a. 1 → a) = 0]
    ∀a. 0 × ((a × μr. 1 + (a × r)) → a)
    = [∀x. 0 × x = 0]
    ∀a. 0
    =
    0
Unfortunately, this is also false if a is false (void/bottom), and all those “→ a”s are actually “→ ¬¬a” because Haskell isn’t a total language—with that pesky intuitionistic double-negative, any function can be partial. But it’s a good rule of thumb for quickly checking whether a function must not return for some (or all) inputs.


In C++, you don’t pay for features you don’t use. The Linux kernel has virtual calls for file systems via function pointers. That’s polymorphism. Why not let the compiler handle that?


You misunderstood on the meta level. My point was that if you want to use one feature of C++, you enable C++ to get that feature. Developer B wants to use ten other features. Developers C..Z want to use different features. Every feature by itself makes some sense in its context, but in the end you're using so many different C++ features that no single developer knows them all. If you use modern C++ there are some ugly edge cases you have no idea exist, and if you use older C++ styles there is hell waiting for you.

The hard part is to select which features of C++ to use and more importantly which NOT to use and to ENFORCE these rules.


> The hard part is to select which features of C++ to use and more importantly which NOT to use and to ENFORCE these rules.

Maybe I've just not kept up properly, but I find it surprising that there isn't a tool which can enforce this already. It seems so obviously desirable.

I guess clang-tidy + custom "checks" + CI server enforcement could achieve something similar, but I'm just surprised that it doesn't exist.

(Of course, many of the useful metaprogramming libraries for C++, e.g. Boost::Hana, use a lot of weird features of C++ to achieve their goal of being simpler to use for clients of the library. I suppose it might be possible to limit the warnings to "self-implemented" functionality, but it might be tricky...)


C natively supports type-safe function pointers -- there is no ugly hacking or boilerplate involved in that. I.e., the C compiler handles that. So what's the advantage of wrapping a class around it?


No it doesn't. Or yes, it does, but we're talking about virtual method calls, emulated with function pointers. This is an argument people always make here in the kernel C++ debates or the GNOME vs KDE debates (GNOME is C and GTK, KDE is C++ with Qt).

What needs to happen to avoid crashes and get correct behavior (a minimal sketch of the pattern follows the list):

1) virtual method tables need to be allocated (otherwise you might be getting a function pointer from unallocated memory and calling it. Good luck surviving that. Bugs like this have happened in both the kernel and GNOME).

1b) Pointers to the virtual method tables need to be correctly set upon every allocation of the struct that contains them. So you lose the ability to create an instance without constructing it. (again plenty of bugs)

2) every level of the inheritance hierarchy needs to fill in the function pointers in the correct order (you can't count the number of bugs of this type in GNOME).

3) the pointers for same functions need to be at the same memory location for every object (and the corollary. Function pointers for different functions cannot, at any point of the inheritance tree, have the same location). This leads directly to the kernel datastructures, where everybody is utterly terrified of changing anything or even reordering fields. And sadly, that fear is there for good reason.

4) 1, 2 and 3 need to be redone (best from scratch) any time the inheritance hierarchy changes. And of course, they need to be done correctly, so you really ought to redo the whole thing across the whole inheritance hierarchy (and somehow tell out-of-tree developers to do the changes on their end), but in C nobody does this because of the amount of work involved. Then ... bugs happen.

5) May God help you if you modprobe a module across one of these changes without recompiling it. In other words, any change to the inheritance hierarchy risks making out-of-tree modules deathtraps (such changes require modifying the source to the out-of-tree module and recompiling)
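
To make that concrete, here's a minimal sketch of the hand-rolled pattern (all names invented for illustration):

    #include <stdio.h>

    struct file_ops {
        int (*open)(void *self);
        int (*close)(void *self);
    };

    struct my_file {
        const struct file_ops *ops; /* must be set on every allocation (point 1b) */
        int fd;
    };

    static int my_open(void *self)  { (void)self; puts("open");  return 0; }
    static int my_close(void *self) { (void)self; puts("close"); return 0; }

    /* Point 2: every "subclass" must fill in every slot, in the right order. */
    static const struct file_ops my_ops = { .open = my_open, .close = my_close };

    int main(void)
    {
        struct my_file f = { .ops = &my_ops, .fd = 3 };
        f.ops->open(&f);   /* the hand-rolled equivalent of a virtual call */
        f.ops->close(&f);
        return 0;
    }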

In C++

a) put "virtual" in front of the function you want to work across the inheritance hierarchy

b) (optional) don't remove or reorder virtual functions in versions where you want to maintain binary compatibility (caveat: not in the functions themselves, and of course, not in any data structures they might use) (in short: anything out-of-tree needs to be recompiled)

Given that KDE maintains binary compatibility across major versions, b) is not optional within that project (with a bit of a cleanup at every major version). Except for DCOP. But if you recompile from scratch for every deployment like every large C++ shop, you can just outright ignore it.

The result of this is that the lookup tables can be compiled into programs. This works ridiculously fast. A virtual function call in C++ is one indirection. Not even one extra instruction (just a much more expensive one than the one you'd ordinarily use).

In Java, if I remember correctly, the last time I checked it was ~20,000 instructions (but can be compiled out by the JIT compiler ... eventually. Then it's still ~1,000). And I assure you Java is a lot more efficient at this than any scripting language.

This is the usual difference between C and C++. In C++ you get all the advantages that lots of manual work gets you in C, at the same runtime cost.

The criticism is that if you give inexperienced programmers lots of high level tools, they will quickly use it to blindly generate programs that are 20G+ of machine code. And ... well that's true. Fixing it can be pretty hard.

The arguments of people like Linus are essentially that it's a good thing that people go through and redo the low-level stuff regularly. It's a lot of work but at times you find problems and inefficiencies. I do agree with that, but sadly, I find it pretty hard to assemble a 2K+ member team that I don't have to pay. So work that programmers don't have to do is a win for me.


I agree about constexpr, templates and exceptions.

But C function pointers are not really emulating virtual functions. They're better and more powerful than virtual functions.


> But C function pointers are not really emulating virtual functions. They're better and more powerful than virtual functions.

"Better" is a value judgment, so I'm not sure about that, but...

AFAICT they're more like prototypal inheritance than the inheritance model of C++.


They're not really "inheritance". They can do what inheritance can do, and much much more.


If that's your argument, then I'd say that C function pointers are absolute shit compared to LISP continuations.

How would I do this in C?

  int some_function() {
    // some processing
    fp = package_the_rest_of_this_function_into_a_function_pointer();

    do_some_background_processing_and_callback(fp);

    // all the background processing is done
  }
In LISP, this is easy, in fact, in most LISPs, it's builtin.
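
The closest C equivalent I know of is hoisting "the rest of the function" into a named callback and threading state through a context struct by hand, which rather makes the point about how clunky it is (all names here are made up):

    #include <stdio.h>
    #include <stdlib.h>

    struct ctx { int partial_result; };

    /* "The rest of the function", hoisted out by hand. */
    static void rest(void *arg)
    {
        struct ctx *c = arg;
        printf("background done, partial = %d\n", c->partial_result);
        free(c);
    }

    /* Stand-in for the background machinery; here it just invokes the callback. */
    static void do_background(void (*cb)(void *), void *arg) { cb(arg); }

    int some_function(void)
    {
        struct ctx *c = malloc(sizeof *c);
        if (!c) return -1;
        c->partial_result = 42;   /* some processing */
        do_background(rest, c);   /* the "continuation", passed explicitly */
        return 0;
    }

    int main(void) { return some_function(); }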


Of course that's true. But Lisp is shitty at controlling memory use, indirections, and manual MM.


Which is why I said "more like".

Can you give examples of this "much much more" you're referring to?


The lesson here is that if you're designing an operating system, you should also be designing/evolving the programming language to go alongside it. The original authors of Unix did this (developing C in parallel), and so have many more obscure OSes. This could have been written simply as 'max()' with appropriate modifications to the compiler to make it do the right thing.


Things have changed over the years. Even if the developers have patched gcc, or "own" gcc, Linux still has to build with gcc 4.4, otherwise many distros will have problems, so it does not solve the problem.

Moreover, changing the semantics of C as you have suggested would break other, non-Linux-kernel existing programs. There may exist programs depending on the K&R version of max(), side effects and all. Who knows?


I think asking the GCC developers for a builtin form of __is_constant would be a sensible step, so that the worst of those macros can eventually be retired.


I'm 100% on board with this. I think big low-level projects, including AAA games (my field) would benefit by treating the compiler exactly like a dependency they have the source for. We might stumble on very good language evolutions as a side effect.


Don't we risk losing something if everyone is using their own dialect of the language?


I don't think so. But you're safe, nobody will do this.


Yes.

IDE support, debugging, instrumentation, profiling, portability, community libraries.


This is why C++ is so nice. Constexpr fits the bill perfectly instead of using these non-portable hacks.


> In fact, in Linux we did try C++ once already, back in 1992.

> It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.

https://lkml.org/lkml/2004/1/20/20


A 14-year-old opinion based on a 26-year-old experiment.

C++ has evolved tremendously since. There are better arguments.

(and I say this as a C developer who doesn't believe C++ is the right answer)


With no solid ABI it is unlikely.


Is that a likely problem for the kernel, actually? Given that there's no API stability, and the fact that syscalls are an explicit ABI anyway, I don't see how that'd matter for linux?

(not that I'm arguing for C++ in linux or such)


When using C++, you also need to take into account the compiler ABI.


Does that really matter in the context of the Linux kernel, seeing as GCC is the only supported compiler currently?


Sorry, just my sour grapes.

I recently created a third-party SDK DLL in C++ and could not pass C++ types natively (unless you coupled to the compiler, and maybe the compiler version).


This is C++ on Windows for you. On Linux, most compilers follow the ABI that GCC created, which has become very stable. I actually cannot name a C++ compiler that doesn't follow that ABI.


Actually, Linux nowadays uses the Itanium C++ ABI (even on non-IA64 architectures), and AFAIK it was created by Intel, not GCC.


I stand corrected. Indeed, the Itanium ABI defined by Intel is the base.

The important part is that it is used by most, if not all, C++ compilers on that platform.


Which is why any Windows developer that wants to export OO libraries uses COM instead, with the benefit that any Windows compiler that can speak COM is able to use the library.

UWP now even supports generics and actual inheritance.


Kernel modules?


One can use the C ABI with C++.


How does that work with templates, overloading, operators, etc?


They aren't exposed to ABI surfaces. Only C compatible stuff would be.


Cf. e.g. Microsoft's CRT implementation, which uses C++ by now, but still exposes a C API.


On Windows that ABI is COM; on Windows 10 it was updated to UWP and is quite feature-rich, even if it doesn't support 100% of C++ features.


This was 14 years ago. In that time

* C++ has advanced considerably, and features like constexpr, lambdas, unique_ptr, etc. make writing code easier without runtime overhead.

* The C compiler that Linux uses is now written in C++

* Clang tidy checks make it relatively easy to keep problematic C++ features out.

The fact that C++ is largely a superset of C means that just like GCC did, there can be a gradual migration of code.


> C++ has advanced considerably

...and gained considerable complexity in the process. (I'm aware that C has gained some too, but not to the extent of C++.)


It's a true statement that C++ has gained considerable complexity since those days, largely due to the C++ committee's very religious approach to compatibility. How that added complexity has influenced how complex C++ applications must be is a discussion with considerably more nuance.


The C language is simple, and the result is this considerably complex piece of implementation instead.


And I'd wager the Linux kernel has become more complex in the past couple decades too. But so what?


I'd argue: Yes, this is true. But you need at least one stable part in the toolchain for the kernel to get reliable results. You can't have your hammer transform into a screwdriver just because you swing it in the other direction. And if there is such a tool/case, you need to be absolutely sure how and when it behaves this way, on every possible platform.

This is not the case for evolving C++.


To me this situation is more like having a screwdriver that can only be screwed with hammers from a particular manufacturer (Linux -> GCC), because that manufacturer is the only one that makes hammers with screw-driving tips that work with that particular screw head (non-portable macros for type-checking or whatever goes on there). You might as well just get a normal screwdriver (C++ compiler) and a normal screw (C++ program) like the rest of the world does -- and there's no implication that you have to keep changing screw tips (-std=c++??) every time Apple (standards committee) decides to come up with a new format (C++17).


If you are going to argue that, then you need to look again at the headlined item, where it was not the case that it behaved the same on every possible platform. It is a non-standard mechanism of one compiler, that was picked based upon a compromise decision about what versions even of that one compiler it would actually work with.


The problem is that you can't always require people to use the latest bleeding-edge version to build your code. The Linux kernel especially has quite some requirements for backwards compat.

And yes, the fact that ABI stability isn't really a thing in C++ land is a huge problem too. It has been tried before, with BeOS, and it sucked.


Look up the source code for "max" in C++. It's huge.


This?

    template<typename _Tp>
      _GLIBCXX14_CONSTEXPR
      inline const _Tp&
      max(const _Tp& __a, const _Tp& __b)
      {
        // concept requirements
        __glibcxx_function_requires(_LessThanComparableConcept<_Tp>)
        //return  __a < __b ? __b : __a;
        if (__a < __b)
          return __b;
        return __a;
      }
I mean, there are more overloads, but it isn't what I'd call huge.


Would that thing return precisely the same output for all possible inputs as the new Linux max()? I would imagine that even if the answer was yes, it would also require quite a few provisos.

To be honest, I'm just a bystander here but I did actually find the C based max() presented easier to appreciate than the C++ one you present here.

The C one looks more like a maths proof: build the argument ... et voila. It also does not have comments inline because it is a sort of comment itself. The C++ version has two inline comments, one of which is incorrectly formatted (space or no space after indicator) and the second of which seems superfluous because it uses code to describe code. The C++ version, after you strip out the comments, looks very concise. If it does the same job then good stuff. After that we are down to whether x,y is a better choice than a,b. On balance I prefer a,b for arbitrary inputs and x,y etc. for functionally dependent ones - so the C++ example wins here for me.


> The C++ version has two inline comments, one of which is incorrectly formatted (space or no space after indicator)

It’s not unusual to use a space after // for comments and no space for code that has been commented out.


> The C++ version has two inline comments, one of which is incorrectly formatted

This seems like the kind of thing which is easily corrected and probably not relevant for determining which is the better approach.


To be pedantic, note that this is not the "C++ version" of max(). This is a particular STL's implementation of max(). Different STL implementations may approach it somewhat differently.

In practice, the style for writing STL code is often quite ugly (sometimes deliberately ugly), for reasons which aren't really interesting to most people.


It is interesting; it is related to this downside of C++, that templates have to be defined in header files instead of, say, being compiled into some kind of module.


Yes it is easily fixed but this is a lack of attention to detail in a fundamental area. I would imagine that many, many people have seen that code at some point and yet have not bothered to fix up a glaringly obvious formatting error. That may say more about people than the quality of the code but it looks awful to an outsider (like me).

That chunk of code is one of the fundamental building blocks of C++. When presented as exhibit A it should look good. To an outsider that sort of thing looks bad; even if the presented thing is correct, the silly error will be obsessed over - as here.

If something is important enough, and I think that C++ is one of those, then for $DEITYS sake fix the obvious stuff before having to be an apologist for it. A simple sed script run decades ago would have done that.

The reason I am labouring this point is that one day it won't be a discussion on HN debating this sort of bollocks but something involving lots of dosh. If S&M ever realised what source code really looked like - they'd have a fit. The world turns ...


I'm going to have to disagree that a missing space in a comment is a fundamental flaw.


Argh, I deserve my portion of lashes, but it stands out so clearly in my experience: most coders who don’t care about syntactical purity - formatting, indenting, consistency, meaningfulness, etc. - have less awareness and make more errors than those who do. It is not a strict rule, but a sign of weakness in that regard.

  }
  int foo(){
    for(x=1; x<len*2;x++){//over 2 array len
    ...(no empty lines for next hour)
I know I’m a clinical perfectionist, but try to see the point behind my complaint. Our brain can process only a few things in parallel. If you overload it with parsing and do not provide hints like grouping and/or formatting, you occupy a couple of threads with complete BS. As if programming was not one of the hardest things already.


Have you considered that it might be intentional?


I agree. This looks to me like a commented out source line, not a comment that describes what comes next using a similar code expression.

It makes sense to drop the space before the start of the commented code because that way you can just delete the // and obtain the original source line.


You are upset about comment formatting, but find this easier to appreciate?

  ...
  #define __is_constant(x) \
      (sizeof(int) == sizeof(*(1 ? ((void*)((long)(x) * 0l)) : (int*)1)))
  ...
How many C programmers would even understand what that code does, without an explanation?


And it is not even standard C, but uses GCC extensions.


Does this coerce distinct types a and b, or just fail if they are different types? The C one will evaluate the comparison / ternary expression via the usual integer promotion rules.


The C++ version does not promote types. The language is generally much stricter about implicit type conversions, so this is very much in line with the language design.


For Linux kernel purposes the following one-liner would suffice

    template<class T> constexpr T max(T l, T r) {return l < r ? r : l;}


And this short snippet in D

    T max(T)(T a, T b) { return a < b ? b : a; }
without requiring explicit constexpr annotations and with less noisy template syntax ;).


Does it compile the Linux kernel? ;)


The C version works with distinct types for a and b. Does this?


No it doesn't, which is one of the major advantages it has over the C version; you have to do an explicit cast when the args are different types (almost always a potential source of subtle bugs).

Or what edflsafoiewq said: https://news.ycombinator.com/item?id=16721559

It is however trivial to write a version that works with distinct types, if that were really what you want.

  template<class L, class R>
  constexpr std::common_type_t<L, R> max(L l, R r) {return l < r ? r : l;}


Assuming the single template argument version: if T has an implicit constructor taking type R as an argument, wouldn't the second argument to max, of type R, be implicitly converted to T before getting passed to the max function? So there is no need for a special version taking both L and R?

This is a property of C++ implicit constructor rules; it is not unique to this function. In most cases this is something you want to avoid, but for integer promotion it can sometimes be useful.


Nope, template type deduction never implicitly converts: https://godbolt.org/g/stZBK1


Oh, "conflicting types for parameter 'T' ('T2' vs. 'T1')". I'm surprised but it's a nice surprise. Finally something in c++ where safety is valued more than implicit dangerous magic.


C++'s type system has always been stricter than C's. Even the following totally valid C program is ill-formed C++

    int* x = malloc(5 * sizeof (int));
because in C++ you cannot implicitly cast a void pointer.


You'll have to give the template parameter explicitly, like

    max<int>(-1, 2u) // => 2


A difference is that the linux macro works for non-constant expressions as well.

All the craziness is in order to support both constant expressions (and keep them constant expressions) and non-constant expressions with the same max() macro.


That version does this. From cppreference (emphasis mine):

>The constexpr specifier declares that it is possible to evaluate the value of the function or variable at compile time. Such variables and functions can then be used where only compile time constant expressions are allowed (provided that appropriate function arguments are given).


That one-liner I posted works with both constant and non-constant expressions (and in fact is more likely to be evaluated at compile-time than the macro), is type-safe, and has no side effects assuming both arguments are arithmetic types or pointers (like all uses of `max` in the Linux kernel / C-language programs).


In C++, all the craziness and brittleness you find in the "C version" is actually encompassed entirely within the `constexpr` keyword. The compiler gets to deal with a lot of complexity to make that possible, but for users it is genuinely "that easy".


In libcxx it's a few lines. Unless you also count the overloads for initializer_list etc.


What exactly is the typecheck macro doing here? I get that it compares the sizes of two pointer values but I don't know why and I feel like there is some C standard nuance involved here that I don't understand. Also, is the `sizeof` used to force evaluation at compile time?


The type of == is always int, so the sizeof is sizeof(int), and the !! makes the result always be 1 (true). The entire purpose of the macro is to have the compiler warn if the types are incompatible.


Thanks! However, I still don't understand this completely. So the sizeof is there just to force evaluation of the comparison and because == can only be used among compatible types the compiler would warn? Why is the !! necessary then wouldn't sizeof(int) be enough as a "true" value?


It's to have the compiler produce a warning if the types aren't the same. __cmp_once produces such a warning, and they don't want the warning to go away if it decides to use plain __cmp instead of __cmp_once.

It doesn't "do" anything otherwise; it always evaluates to "1".


Doesn't the compiler produce a warning by default (on comparing a `char` and `int`, for example) when using -Wall?


I’m surprised this is necessary at all. Wouldn’t a compiler at any reasonable optimisation level optimise __cmp_once into __cmp automatically (and then into the resulting max constant) if used with constants? Seems like very basic constant propagation.


As explained in the article linked at the top ("LWN recently looked at the kernel's max() macro"), the issue isn't one of optimization, it's one of syntax. Sure, a decently optimizing compiler can optimize __cmp_once into a constant value. However, C99 requires that non-VLA arrays have a size that is a constant expression, which is a syntax thing; even though GCC knows that __cmp_once evaluates to a constant value, the resulting C coming out of the preprocessor is of the wrong syntactic shape.
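
A sketch of that syntactic difference (GCC; simplified to int instead of typeof):

    #define CMP_MAX(x, y)      ((x) > (y) ? (x) : (y))
    #define CMP_ONCE_MAX(x, y) ({ \
        int _x = (x);             \
        int _y = (y);             \
        _x > _y ? _x : _y; })

    int main(void)
    {
        int a[CMP_MAX(3, 5)];      /* fine: an integer constant expression */
        int b[CMP_ONCE_MAX(3, 5)]; /* -Wvla warns: syntactically a VLA */
        (void)a; (void)b;
        return 0;
    }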


The problem is cmp and cmp_once are actually different if x and y are function calls. In that case cmp calls the functions twice.


I think you've misread the question. Grandparent isn't asking why use cmp_once at all; they're asking, why not always force the single evaluation of x and y (i.e., always use cmp_once and let the compiler optimizer reduce the resulting expression)? A responsive answer is given in https://news.ycombinator.com/item?id=16720118 .


I would like to use a language that doesn’t involve sizeof(typeof...) and template(Tmagic...) both.

  #include “meta.h”

  @tr max(@ta a, @tb b)
    if (is_comparable(ta, tb))
      tr = common_base(ta, tb, optionshere...)
      produce_code {...}
    else
      compile_error “incompatible types in $(__func__)”
For decades I haven’t been able to figure out why we can’t just take all the ideas from Lisp and code in happiness.


Lisp has hard to read syntax and no static type system, so not really something to imitate.


I think I should stress that my hope is not to use Lisp instead of our languages of choice, but to simplify metaprogramming tasks in them with approaches at least as simple as the ones Lisp has.

E.g. all C constructs can be described in a set of structs: IfStmt, CallStmt, BinopExpr, etc. Generating or inspecting these on the fly could allow hundreds of metaprogramming frameworks to be created, a few of which could prove their best fit. But instead we're locked into ugly cpp macros and C++ templates that even a seasoned Haskell monader hardly understands and has no tools to explain. Debugging is hard and manual, metadebugging is not even a thing, cppcheck and other code-analysis tools are enormously complex and rare, and Apple's ARC is a proprietary language feature for NSObject instead of two pages of metacode available to anyone with an "int rc" in their struct. I hope that snippet above is now more clear.


> Lisp has hard to read syntax and no static type system,

The first is subjective, and there are Lisp-family languages with static typing.


That looks very similar to an Elixir macro (heavily influenced by Lisp macros). If you haven't checked that language out, you might find yourself liking it.


For an explanation of __is_constant(), see the thread where it was suggested: https://lkml.org/lkml/2018/3/20/845 (this was also indirectly linked to in the article)

tl;dr: If one of the expressions in a ternary operator is a null pointer constant, the result type is that of the other expression.


This in C++:

    template<class T> 
    constexpr const T& max(const T& a, const T& b) {
        return a>b ? a : b;
    }


Edit: apologies, I'm blind; never mind. I thought I double-checked but the use of > instead of < tripped me up.


Stepanov disagrees. You should return the second parameter if a, b are equivalent (and you should use < anyway):

  template<typename T>
  inline constexpr const T& max(const T& a, const T& b) {
     return (b < a ? a : b);
  }


What’s the reasoning behind this?


I think that to be comparable in C++, types are only required to implement the < operator, and that is what the standard library uses exclusively.


If you sort2(a, b), it would be nice if min(a, b) == a && max(a, b) == b.
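
The stability argument in a small sketch (hypothetical C pair type; compare by key only, as Stepanov prescribes):

    #include <stdio.h>

    struct item { int key; char tag; };

    static const struct item *min2(const struct item *a, const struct item *b)
    { return (b->key < a->key) ? b : a; }   /* on ties: the first argument */

    static const struct item *max2(const struct item *a, const struct item *b)
    { return (b->key < a->key) ? a : b; }   /* on ties: the second argument */

    int main(void)
    {
        struct item a = { 1, 'a' }, b = { 1, 'b' };  /* equivalent keys */
        /* (min2, max2) preserves the original order of equivalent elements. */
        printf("%c %c\n", min2(&a, &b)->tag, max2(&a, &b)->tag); /* prints: a b */
        return 0;
    }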


The C version works with distinct types for a and b. Does this?


That's not a feature, it's more bug prone.


This one works too, if they are implicitly convertible (e.g. max(1, 2.0))


No it won't. You'll get a compile-time error, as T could deduce to either int or double. Unless you explicitly specify it as max<int>(1, 2.0) or max<double>(1, 2.0).


I’m not up to date on GCC macros, so I’m not familiar with this syntax:

    #define __cmp_once(x, y, op) ({	\
        typeof(x) __x = (x);	\
        typeof(y) __y = (y);	\
        __cmp(__x, __y, op); })

Is that a block-expression, or what?


Ok, I found the doc for GCC Statement Expressions. Seems very Ruby-like.

https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
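
For reference, a tiny example - the value of the whole ({ ... }) block is the value of its last statement:

    #include <stdio.h>

    int main(void)
    {
        /* GNU statement expression: evaluates to the last statement's value. */
        int m = ({ int a = 3, b = 7; a > b ? a : b; });
        printf("%d\n", m); /* prints 7 */
        return 0;
    }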


What's the advantage of ever using cmp over just always using cmp_once?


There's an answer in another thread: https://news.ycombinator.com/item?id=16720118


I still don't get it.

If it can't tell that the cmp_once is a constant expression why can it tell that cmp is a constant expression?

EDIT: I think I get it. In the event that the arguments are constant, something about the typeof business prevents it from being constant, so that's no good if required in a context where a constant is required. But, when not required to be a constant expression, you want to enforce single-evaluation of the arguments, so then cmp_once is used.


I think it's the ({ }) bit (or perhaps the typeofs) that turns it from a constant expression to a constant value, but I'm not really sure. But yeah, something about cmp_once makes it the wrong kind of syntax.


If foo and bar are functions,

  foo() > bar() ? foo() : bar()
can have a different result than

  int __x = foo();
  int __y = bar();
  __x > __y ? __x : __y


I think you've misread the question. Grandparent isn't asking why use cmp_once at all; they're asking, why not always force the single evaluation of x and y (i.e., always use cmp_once)?


Ah, that's because cmp_once doesn't yield a constant even when the inputs are constant. The reason that matters is there are some arrays in the kernel declared like

  int things[max(size1, size2)];
and there's recently been a push to remove all VLAs (arrays with non-constant size) from the kernel. The -Wvla flag was warning about these arrays even though they're actually constant size, so something needed to be done.


Yep. https://news.ycombinator.com/item?id=16720118 is another good explanation :-).


I'm really confused by the 1s in this macro. What are they, syntactically? The second one looks like it could be being cast to the type "pointer to y" but the first one has the sizeof expression in front of it.

   #define __typecheck(x, y) \
		(!!(sizeof((typeof(x)*)1 == (typeof(y)*)1)))


Both are being cast - the parentheses are a little confusing, but the sizeof expression applies to the result of the comparison.



