#define add(a, b) _Generic(a, int: addi, char*: adds)(a, b)
Just because you can doesn't mean you should. Someday, someone will have to read that code. Note that when "add" returns a string, it has done an allocation, but in the integer case it hasn't called "malloc". Now you'll need to know the type to free the result correctly.
Back in C99, generic trig functions were put in to make the number-crunching people happy. After all, FORTRAN has them. Everybody was tired of different versions of math libraries for "float" and "double". Then in C11, the mechanism behind that, "_Generic", was exposed. Now, anybody can use it. But, probably, they shouldn't. [1]
The fact that it's new doesn't mean it's in bad taste.
GCC has had a similar mechanism in C for decades. It was experience with that mechanism, and similar extensions in other compilers, that led to it being in the C Standard.
This whole _Generic stuff is a mistake. There is already a descendant dialect of C which has nice overloading, namely C++.
Analogy: imagine some people aren't happy with the new Fortran. Fortran 66 is the best language ever, and the only flavor of Fortran anyone should use ... except: they like a couple of features from Fortran 2008 and Fortran 90.
So, they whip out the Fortran 66 standard and fork it to create "Fortran Classic" where they just adopt what they like from mainstream Fortran --- and in a totally incompatible way!
But the OP title is misleading, this isn't function overloading at all. It's macro overloading. Subtle but important difference. For example, you can't pass the OP's "add" around as a function pointer. You can't declare its prototype in an outside file (leaving the linker to link it in) without putting its full definition in said outside file. Etc.
It also gives, IMO, better error messages than C++. If we take OP's code and change a and b from int to long, we get:
test.c:20:24: error: controlling expression type 'long' not compatible with any generic association type
printf("%d\n", add(a, b)); // 3
^
test.c:16:28: note: expanded from macro 'add'
#define add(a, b) _Generic(a, int: addi, char*: adds)(a, b)
This is better than the situation in C++ where it would say the function does not exist, and then go through every function with that name (possibly quite a lot) and say why each one doesn't work.
I want to pass a function as a pointer even though it is overloaded. C++ supports that. For instance if we do:
int (*p)(int, int) = add;
overload resolution kicks in there.
I want detailed error messages about overload resolution. Maybe not all the time: give me the executive summary ("no suitable overload of add was found") and the details if I specify some "-Woverload-detailed" option or whatever. That's a quality of implementation matter; neither ISO C nor ISO C++ spell out what diagnostics should look like and how detailed they should be.
Token-level macros are hacky cruft that shouldn't be used as the development bed for new features, period. All this stuff ignores 35 years of progress in the C dialect scene alone, not to mention 60 years of progress in general.
Above, add isn't even what is understood in computing as a "first-class function". If we make it into some hacky macro, we are taking even that second-class status away from it.
> For example, you can't pass the OP's "add" around as a function pointer.
To be fair, in C++ you have to say which overload you want to point a function pointer at. So the fact that you have to specify the real function you want to point at, in C, doesn't seem unreasonable.
No, you don't have to say that in C++ and there isn't any way to, in fact.
// Well, you do say it ... with type!
int (*ptr)(int, int) = add;
This will take the address of add(int, int) and not some other add like add(double, double). There is no explicit way to refer to one or the other.
You're right in that we can't point to the collection of add functions as such. There is no generic function add of which the overloads are its implementations, such that we can have a pointer to this generic add aggregate and have ptr(a, b) be subject to overload resolution.
I think that if you want to be able to construct an object ptr out of a set of plain functions, such that ptr(a, b) chooses among those functions based on arguments, that may be doable. Just not with a plain function pointer.
You seem to have changed your mind while writing your response, and you appear to have come to the same conclusion I did.
Personally, when I learned about _Generic, I wondered why the C committee hadn't just adopted C++-style overloading. I'm convinced the reason is to avoid name mangling. The end result is that the programmer does the name mangling "by hand," with natural consequences for doing so.
However, there are some important points, which make _Generic both more flexible than C++ overloading in some ways, and less flexible in other ways:
* _Generic only dispatches based on a single parameter.
* _Generic doesn't have to expand to a function call; it could expand to a format string, for instance (although if a macro FOO(X) selects a string literal via _Generic, you can't put FOO(X) next to another string literal and expect the compiler to join the two strings, since adjacent-literal concatenation happens before _Generic selection).
* _Generic doesn't consider type conversions when doing the dispatching.
* _Generic dispatching can be as complex as the programmer wishes (e.g., combining _Generic(X) with switch (sizeof(X))); yes, you start with the type, but you don't have to limit yourself to exact matches.
* _Generic can have a "default:" branch; this is possible in C++ but not obvious (have one overload take "..." to guarantee it will be the worst match, and only chosen when there is no other option).
C++ is not just a newer version of C with newer features. It is a completely disparate language. That's not a bad thing (tradition is not always good), but to say that if you want generics, you should just leave C behind is a little misguided.
It's not just a couple. There are a large variety of things in C that Cxx does not support at all (e.g., VLAs and _Generic), and many more things that, though they are in both languages, behave very differently (e.g., sizeof).
It's fine for people to like Cxx, but I do wish we could stop perpetuating the falsehood that C is just a subset of Cxx. Cxx, when it started was a pure superset of C, but that is no longer the case.
Alright, I'll rephrase. There are a zillion differences, but with a couple of minor exceptions, there is very little C++-incompatible C that cannot be straightforwardly expressed as C++-compatible C. I mean, gcc -Wall even turns on -Wc++11-compat and -Wc++14-compat now.
Also, sizeof itself doesn't behave differently; rather, the arguments that get passed to sizeof have different types in a few cases, e.g. sizeof('x'). See Annex C.
However, with all the "great" features of C++ like templates, RAII, and overloading, programmers use and abuse every corner of them to build overly complicated ways to basically create domain-specific languages. It happens in over half of the C++ libraries I've used, otherwise they would just make their API in C.
This is like going back to Y2K and saying "well, if you want double-slash comments you should use the version of C that already has them, namely C++98, and it was a mistake to add them to C".
The only way you can use double slash comments without a C version that already has them is to preprocess your code with some text filter (thereby creating your own dialect which has them).
In the 1990's, dialects of C that had // comments were C++, various dialects accepted by C compilers having // comments as a nonconforming extension over C90, and C compilers supporting draft ISO C features.
Changing dialects just for the sake of comments is silly; the right choice will usually be "don't use those comments".
In C programming, I still don't use // comments today, to keep my code strictly compatible with C90 (in my opinion, the best C dialect to date). I have been a C++ developer for many years and of course used // comments; I have nothing against them, and in fact wouldn't design a language syntax that doesn't have a "from here to the end of the line" comment feature.
(Text filtering to obtain C features is not unheard of, by the way. The CLISP sources use something called "varbrace": a text processor which provides mixed declarations and statements.)
All of C's revisions retain a coherent philosophy that is incompatible with the philosophy of C++. When C is the right tool, C++ is no more a substitute than Java or C#.
>When C is the right tool, C++ is no more a substitute than Java or C#
When C is the right tool, assuming that it ever truly is, C++ is a vastly better substitute than Java or C#. C++ uses the same machine model as C; Java and C# do not.
C++ incorporates nearly all of C99 directly, save for VLAs, restrict, designated initializers, and a lot of small stuff that won't affect any modern-ish C code (e.g., no implicit int). VLAs and restrict are explicitly not in C++ because they're somewhat broken (C++14 did add "dynamic arrays" which are VLA-lite, and there's work on a better restrict-like functionality, cf. https://isocpp.org/files/papers/N3988.pdf), while designated initializers does not mesh well with C++ semantics.
In practice, then, C++ without RTTI, exceptions, or the standard library is basically a strict superset of C and C code compiled as such will tend to be legal and identical in semantics. The same cannot be said for C# or Java. Given that C++ has some rather useful features (most notably, the ability to guarantee cleanup even in the face of inattentive developers), the only real valid reason to not be using C++ instead of C is if there's no C++ compiler.
I find the lack of implicit conversion from void * really annoying. Any time I see someone cast the return type of malloc in a .c file, I curse C++ and die a little. This sounds like a trivial thing but it very much changes the feel of the language, namely into one where you need to insert pointless casts to satisfy a whiney compiler.
>it very much changes the feel of the language, namely into one where you need to insert pointless casts to satisfy a whiney compiler
You could use the exact same argument against any type-safety feature: inserting "pointless" forward declarations, inserting "pointless" const qualifiers, etc., to "satisfy a whiney compiler."
The thing is, implicit conversion from void* is plainly unsound, type-theoretically speaking. It has certain advantages, but it is not necessary in order to have those advantages. For example, it makes use of malloc easier, but one could also make the vast majority of uses of malloc easier with a small function template like alloc<T>(n) in C++, and it's even straightforward to add additional logic like overflow detection. Mark it inline and stick it in a header, and it's effectively the same as the above ALLOC macro, except that it actually works. Now, contrast
int* array = malloc(256*sizeof(int));
with
int* array = alloc<int> (256);
The C++ expression is simpler. Now, suppose you change the type of the array but forget to change the allocation expression (for whatever reason; maybe you're using the absurd convention of declaring all your variables before you start initializing and using them): you get a type error with the C++ version, while the C version silently compiles. Not too much of an issue in this case -- you'll just waste a bunch of memory -- but go from char to int instead of int to char, and now you've got buffer overflows.
As for the argument made elsewhere in this thread that explicitly casting the result of malloc in C can mask the fact that malloc has been implicitly declared, that's a language bug -- one that was fixed more than fifteen years ago.
Yes, I'm glad that you can use templates to re-implement operator new [] in a way that is susceptible to integer multiplication overflows.
In all seriousness though, I really mean it when I say I intellectually understand the C++ fan's response to a number of these issues. It took me a long time of writing C but I do understand the template-nerd habitat, and can acknowledge its strengths and use those tricks when working in a .cpp file. What frustrates me is to see the rigidity with which it is followed, and the failure to understand the world in a different way. There is a different world out there that doesn't mind an implicit conversion from void pointers, and they're not "wrong".
My point isn't about templates or memory allocation (hence the code that doesn't check for overflow or worry about object construction). It's that implicit casts from void* are unnecessary in C++, addressing your complaint that it's annoying not to have them. My point is that the lack of implicit casts from void* should not be frustrating, because in what are by far the most common use cases there are equally convenient ways of doing the job.
>What frustrates me is to see the rigidity with which it is followed, and the failure to understand the world in a different way. There is a different world out there that doesn't mind an implicit conversion from void pointers, and they're not "wrong".
I have seen and lived in the world where casts from void* are common, and where making those casts implicit is a convenience (and that world is not the world of C++). I understand that world. I understand that viewpoint. I just strongly disagree with it.
> It's that implicit casts from void* are unnecessary in C++
Very well aware of the C++-head's answer to these problems.
> My point is that the lack of implicit casts from void* should not be frustrating,
This was a discussion about whether or not C++ is a superset of C. I wrote up the difference that frustrates me the most and all other comments were missing.
Why programmers don't always use calloc: good question.
Firstly, anyone who mallocs a block and then immediately does a memset(block, 0, size) on it almost certainly should be using calloc. The Linux kernel has kzalloc for this purpose as an alternative to kmalloc, and it's fairly widely used.
Sometimes it is inefficient to set something to zero, when that is not the true initial value (some or all of the zeros will be overwritten with the true initial value before the object is used). If you malloc some structure and then painstakingly initialize every member, then calloc is wasteful.
If you malloc some structure and then forget to initialize some of the members, calloc will hide that bug by initializing them to all-zero bits, which might not be the correct value. If the uninitialized parts of the structure aren't clobbered with zeros, a tool like Valgrind can give you an accurate diagnostic about the missing initialization. On the other hand, if you don't have such a tool available, the all-zero behavior is more deterministic and arguably better: e.g. on most mainstream platforms an uninitialized pointer overwritten with zeros becomes a null pointer, whose dereferences are readily detected.
Much of this reasoning mirrors reasoning about whether to initialize a local variable to zero, if its true initialization actually happens later.
In my experience, using malloc over calloc makes it much harder to find subtle bugs that result from off-by-one under-set errors (i.e. you malloc 100 bytes, only set 99, and then later read 100 bytes). With malloc the 100th byte is filled with random garbage, while with calloc you get something consistent.
Bugs where you write 101 bytes are easy to find with tools like Valgrind, but bugs where you malloc and don't initialise the data structure correctly are a real pain.
Probably the best reason to hate it is that if you didn't include the right headers, malloc is implicitly declared to return int. That creates a bug with most 64-bit compilers, where int is 32 bits but pointers are 64. The cast will mask the bug.
But more stylistically, let's be honest, something like this is way too verbose:
struct foo *bar = (struct foo *) malloc(sizeof(struct foo));
I see code like this often, and the only explanation I can think of is that people don't know the language well enough to write:
struct foo *bar = malloc(sizeof(*bar));
> A related question is why do people use malloc if calloc is available?
Seems like a very different question... calloc multiplies, calls malloc, and zeros. I guess if you don't want to multiply, don't want to zero, don't want to bother specifying another parameter, some combination of these, then malloc might be a better choice. Though I do recall a few years back the OpenBSD folks (possibly others too?) focusing on making sure calloc checks for overflow when it multiplies. [See also: reallocarray]
Any decent C compiler has a warning for the situation that an external function is called without a prototype declaration having been seen.
GCC also has diagnostics for when a function definition occurs without a declaration.
From GCC man page:
-Wmissing-prototypes (C and Objective-C only)
Warn if a global function is defined without a previous prototype
declaration. This warning is issued even if the definition itself
provides a prototype. The aim is to detect global functions that
fail to be declared in header files.
-Wstrict-prototypes (C and Objective-C only)
Warn if a function is declared or defined without specifying the
argument types. (An old-style function definition is permitted
without a warning if preceded by a declaration which specifies the
argument types.)
-Wimplicit-function-declaration (C and Objective-C only)
Give a warning whenever a function is used before being declared.
In C99 mode (-std=c99 or -std=gnu99), this warning is enabled by
default and it is made into an error by -pedantic-errors. This
warning is also enabled by -Wall.
If you turn on -Wall, you should be warned that you didn't include the right headers?
I was under the impression that lots of implementations of malloc actually call calloc under the hood. Unless you are calling malloc in a tight loop, I can't see the extra step of zeroing being an issue. In one of my programs where performance is critical, I changed all my mallocs over to calloc and couldn't measure the time change -- I did find I had fewer difficult-to-duplicate bugs, though.
Edit. Interestingly CERT recommends casting [1]. Seems to solve a much worse issue than forgetting to include stdlib.h
You partially countered the parent's second sentence, but didn't address the first. The real reason to use C over C++ is if you want your codebase to retain the positive effects of following C's coherent philosophy. True, as you say, C++ can be made into almost a superset of C, and thus sticking to a small subset of C++'s additional features is a real possibility - in fact, I think it should be more common. But nobody does that, perhaps because of the difficulty of enforcement, because it would seem aesthetically messy, or perhaps because the people who aren't conservative enough to want to stick with C almost unchanged have mostly switched to C++ "proper". I don't know, but here are some of the reasons I stick with C.
My interpretation of how C benefits from its philosophy:
- Name explicitness: in C, every function in scope at the same time should have a unique name, written out in full. This means that even without much context, such as in a diff (I think there's a quote by Torvalds related to this), or with context but without advanced IDE tools, there's no confusion as to what function is being called. C++ violates this rule starting with the simple feature of method dot syntax - where the namespace for methods depends on the type of the receiver, which may be declared in some totally different location - moving on to overloads, where which overload is selected may depend on several arguments and implicit conversions and defaults and templates, which collectively can be scattered all over the codebase - never mind all the advanced stuff. The resulting code can be more succinct, but is often less clear.
Now, there may be some functions which are just so common to type, and/or which have variants acting on different types that act so similarly, that even a philosophy rejecting overloading in general may want to accept it for them. That's sort of what <tgmath.h> was for, and now _Generic lets you make your own; you don't need the entire C++ template system for that.
(In a sense, C itself violates name explicitness when it comes to struct field names, since different structs can have fields with the same name. Old fashioned C code uses globally unique names for fields, though presumably more due to feature-challenged early compilers than any philosophy. It has the benefit of making it possible to '#define my_field my_union.foo[0]' etc.)
- No implicit function calls: As you mention, destructors are pretty useful as a safety feature, and I think it would be nice to use them in C, but C++'s copy constructors and copy assignment operators and regular constructors and implicit conversions make it very easy to execute some code you didn't really want or need, without indication in the code that something expensive is happening.
For example, implicit copies caused by creating std::strings was noted last year to cause a huge number of unnecessary allocations in Chromium:
- 'Mechanical sympathy': C's inability to override operators like + and [] has downsides (see below), but it does make it more clear what code lowers down to basic machine operations and what code may result in expensive algorithms being run. Compare the performance of + on integers to + on strings.
- Mechanical sympathy regarding code size: C++ templates make it very easy to bloat binary code size for minimal or negative runtime performance gain. Not that not having them at all is better, exactly, but C binaries tend to be a lot smaller...
- Simplicity -> easier to learn. Explicitness -> harder to muddle along with an understanding that's sort of right but not quite, which has upsides and downsides.
- Simplicity -> predictability for advanced users. The C spec is small enough that you can get to a point where you can read C code and almost always know what the standard says about what it should do. There are some confusing parts of the manual that get posted around on the Internet as puzzlers, like integer promotion rules, undefined behavior, sequence points, etc. - and if I were designing a language from scratch, I'd take a hatchet to these sections and pick something easier to understand. But the number of such parts is one or two orders of magnitude lower than in C++. Think function and template overload resolution rules, or the many types of construction, or the many random features which few know about because nobody uses...
- Resembles a "post-OOP" language due to being pre-OOP: these days it seems to be popular for languages to encourage things like:
-> composition over inheritance
-> using more dumb structures to store aggregate data rather than making everything a class with manually written constructors/getters/setters, hidden fields, invariants, etc.
Well, C has no inheritance, and it has long made it easy to use dumb structures. In particular, if you wanted to be able to use a struct value as an expression rather than declaring a separate variable, in C++ you had to use a constructor: 'Foo(a, b, c)' - which means that even if you didn't really want any behavior in your struct, you had to manually write a constructor to forward the parameters to the corresponding fields. In C99, you could write (struct foo) {a, b, c} without any boilerplate. Now in C++11 there is Foo { a, b, c } - which solves this problem at the expense of making initialization rules even more complicated.
And some of the biggest elements that are less about philosophy than lack of coherent design on C++'s part:
- The entire development of the template system as a metaprogramming tool, using a sort of purely functional pseudo-language, is just awful. The whole thing is incredibly complex not as a requirement to provide sufficient power, but because it evolved out of a feature set that was never intended for that use case - in fact, is famously Turing complete by mistake. SFINAE in particular is a horrible hack that makes you do things like add a default argument that will never be specified, and defaults to a class that doesn't do anything except not exist in some cases, and expect the compiler to just silently go along with this - at least concepts will improve that someday. The template system is also just not as nice as macros for a lot of code-generation-like tasks, despite the enormous amount of effort spent on it, and the standards committee's (justifiable) hate for the latter.
- Move constructors and rvalue references implement some nice functionality, but the way they're bolted on adds a ton of complexity to the type system and more boilerplate to write for your classes, and the magic that makes things like std::forward work is needlessly confusing.
- constexpr is a mess, since a large fraction of functions in general could hypothetically qualify for constexpr, but putting it everywhere would be noisy. There are other issues with it.
- Since C++ classes evolved from C structs, all the fields are specified in the declaration, which goes in the header file, even though some of them are private and logically belong to the implementation. A full list of fields is necessary in C++'s traditional compilation model if you want to make instances on the stack, but many classes are heap-only, where having sizeof be a link-time rather than compile-time constant would be no big deal. This results in unnecessary dependencies on .h files leading to longer compile times. In C you can use forward-declared structs for this in most cases; in C++, achieving "PIMPL" requires the sort of workaround code that lives up to the name.
- Poor compilation speed is mainly caused by putting everything in header files, which in turn is caused by C++ trying to retrofit features that are useful, but whose implementation would properly involve the linker - template specialization and aggressive inlining - into C's compilation unit model. Maybe modules will solve this when they're standardized in C++20 or whatever.
- Another problem caused in part by not involving the linker: the difficulty of C++ ABI compatibility on most platforms. It's naturally trickier than C due to the increased use of library types and the need to match specializations across library boundaries, but it doesn't have to be as hard as it is.
- Another is that C++'s overloadable and namespaceable function names have to go through an ABI-dependent mangling process to generate a fake C-compatible name like "_ZTVSd" - which most people can ignore, but for low-level users can be annoying.
- By the way, a knock-on effect of lack of simplicity is that the same code usually compiles slower in C++ mode than in C mode, though this isn't a big deal.
...I may as well acknowledge why I don't think C is the future:
- Destructors are very useful indeed. So is automatic reference counting: both drastically decrease the annoyance of certain styles of coding.
- Re: 'mechanical sympathy', encouraging raw pointer access and such is nice until you start caring about security, at which point the few extra instructions required to bounds check vector accesses are infinitely less problematic than the alternative; in C++ you can stick them in vector::operator[], not in C.
- C makes it extremely difficult to write efficient container types, due to the lack of templates. It's possible using macros, and I've done it, but the contortions and pitfalls inherent in the implementation of such macros eliminate any kind of aesthetic benefit. This encourages things like the use of raw {pointer, size} pairs over specialized types, which requires obnoxious boilerplate, and dumber data structures rather than smarter (e.g. linked list vs. hash table, when the number of entries is expected to be small - until suddenly it isn't).
- On a related note, templates in general, when used right, make it easy to generate highly efficient specialized code, useful not just for containers but in many situations. It's a blemish to C's reputation for speed that it has more difficulty doing that.
- Both languages lack proper sum types, which are a natural counterpart to structs; unions are nice but explicitly tagging them is entirely gratuitous unsafety.
- Sometimes you really just want short code, and not to have to dot every i and cross every t. UI frameworks are a particularly good example, where there are a large number of knobs to turn, and many more that should be left at their defaults, making it very desirable to (a) minimize the length of code required to turn one and (b) make it as easy as possible to leave things unspecified - encouraging the free use of default parameters and overloads, and making the repetition involved in global name uniqueness look pretty ugly. I think this paradigm can be improved upon in general, but I'm not sure C is the language to do so in.
Still, I don't think C++ is the future either, not even the improved-but-further-bloated newer versions. Nor do any of the other languages in the market do a good job at targeting the niche C currently excels at (including Rust).
When C++ is not a substitute, it's for some reasons having nothing to do with language or philosophy, such as not having a C++ compiler for the given chip.
There are plenty of people who strongly disagree with you (Linus Torvalds being the most famous example), so you are stating a matter of personal taste, not of objective fact.
This is a very simple example. It only checks the type of the first parameter. _Generic can operate on all parameters, it just leads to an exponential increase of mess for each additional parameter handled.
While I like _Generic in C11, there is no doubt that it is not as powerful or handy as Templates (this coming from someone who does not really enjoy Cxx templates).
It's hard to say what's more powerful. Certainly templates involve more computation, but _Generic can do some things templates can't (for good and for ill) and vice-versa.
Well, since templates are Turing complete, I would tend to say that they are more powerful. But they work very differently. Templates do code generation, and _Generic is just a fancy-pants type-based macro dispatch.
Like I said, I do not think _Generic is a bad thing (though it can get really messy—which is not to say that Templates can't), just that the example given in the OP is a bit contrived.
It is computationally more powerful, sure. If that's what you'd intended to be talking about, then fine. But that is a distinct notion from "you are able to do more with it" - which is typically the more relevant bit when discussing the power of a feature in a programming language outside a restricted academic context.
(Incidentally, other things being equal - including ability to get done the thing you want to get done - "Turing complete" is a bug, not a feature!)
Expanding a little - "Turing complete" means that your system is capable of computing anything that can be computed in some encoding, but that doesn't mean that it is linked to the rest of the world in a way that will do something useful with that encoding of the result.
Token-manipulating C macros, while obviously less powerful in the sense of computability, can do things that templates cannot do (name mangling, annotations, compiler pragmas) -- and with _Generic, any of these can now be type-directed. Of course, with great power comes the responsibility never to use half of it... but that's pretty true of C++ as well :p
I don't program in C, but I can understand most of this code. However, I'm not sure where the "function overloading" part comes in or why it's significant. Could someone explain?
Only library-wise: <tgmath.h>, the reason _Generic exists, is missing, and <complex.h> exists but can't be used. Some lesser-used and sometimes useful C99 language features/changes are still unimplemented, even excluding features C11 made optional (for good reasons) like VLAs or complex.
Indeed. No need to use "complex" features like RAII/RTTI/exceptions. Generally, I see features like classes, overloading and templates as "compatible" with C-style coding.
[1] http://www.robertgamble.net/2012/01/c11-generic-selections.h...