C++17 Features That Will Make Your Code Simpler (fluentcpp.com)
67 points by joboccara on June 27, 2018 | 86 comments



Every time I read about modern C++, I'm amazed at how different the language looks compared to how it was when I last used it heavily, around 10 years ago. It's had amazing progress, it looks way more "modern" and really seems like a brand new language.


While true, many of the modern idioms were already possible in C++98 when you look at what was being done in OWL, VCL, CSet++ and other similar libraries.

I mean, even RAII was already a thing in MS-DOS C++ code bases.

However, many kept using it as if it was C with some extras.


It didn't exactly help that "many" included the platform toolkit authors. MFC made me look at Motif fondly.

Never mind compiler support. I just recently noticed with delight that I finally forgot the exact VC #pragma numbers to disable all the STL warnings. Which gives me hope that one day I'll manage to do the same to ORA-00907.


Yep, agreed.

MFC wasn't something I would list as a good example of C++ GUI design.

Unfortunately Borland just made a mess of their customer base.

Even today, Visual C++ is not as visual as C++ Builder.


I haven't done any C++ in about 6 years, and that was basically a mix of horrible C++ from the 90's and C. It's nice that the language has evolved. I still prefer Swift (or Rust or Kotlin or even Go for other things). I don't miss C++ at all. C++ used to be the only real choice for high performance code (like games, which that earlier referenced codebase was) but today there are many more choices.


I was using it 20+ years ago (high school), then used it in my previous startup (10 years ago). Amazing how it keeps evolving and modernizing. auto typing keeps growing; it reminds me of Swift.


My biggest complaint with structured bindings is that you have to give a name to every element, so if you're compiling with -Wall, you get a lot of "unused variable" warnings. Similar functionality in other languages lets you use _ to mean "I don't care", but in C++ that's considered just a regular single-character variable name.
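
For what it's worth, a workaround sketch (get_pair is just a made-up helper): casting the unwanted name to void silences the warning on the compilers I've tried, and [[maybe_unused]] on the whole declaration tends to work too.

    #include <tuple>

    std::tuple<int, int> get_pair() { return {1, 2}; }

    int use_first_only() {
        auto [value, rest] = get_pair();
        (void)rest;   // the usual idiom to silence the unused-variable warning
        return value;
    }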


Mine, tentatively, is that the binding is made by declaration order, not by name:

   struct { int a; int b; } x { 1, 2 };
   auto [b, a] = x;
Now b is 1 and a is 2, even though in the original structure b was 2 and a was 1. This is contrary to how most other languages with this feature behave.

(I wrote a bit about this once: https://thebreakfastpost.com/2017/11/15/c17-destructuring-bi...)

I haven't had the opportunity to write any C++17 in production yet, so I'm not sure how dangerous this dangerous-looking thing actually is.


This behaviour is the only reasonable one; "auto [b, a] = x;" cannot be considered wrong in any theoretical or practical way.

The b and a in "auto [b,a]" are arbitrary names you are applying to the result of the destructuring assignment, and they are essentially unrelated to the names of the struct members.

If you write "auto [apples, oranges] = x;" it means exactly the same thing as [b,a] and [a,b]: two ints, each with an arbitrary identifier, the first with value x.a and the second with value x.b.

The only unusual and slightly flaky thing that's going on is interpreting a struct as a sequence; the order of struct fields is usually relevant for its memory layout, but not for language features that use identifiers.

What you appear to want is something entirely different: constructing something that automatically mirrors the names of the struct members.

For example Java reflection allows enumerating the fields in a class, getting their names and other metadata, and extracting from an object the value of a field of its class. You could then populate a map with a destructured copy of an object in which each value has the same name as in the original object.


Your reply is written as if you think my objection is absurd - then in the middle of it you concede exactly the thing I'm objecting to, and describe it as "unusual and slightly flaky"!

I do get what you're saying; I can see why it's done this way. C++ lacks tuples at the language level, so there is no other way to do an anonymous tuple-style binding. Being forced to bind to the same name as the structure element would be inconvenient for various reasons. It makes sense. I just think it's problematic that changing the declaration order of a struct can silently break code that uses it. And it is different from other languages - not just natty ones like ML but also mainstream ones like Javascript ES6 - which makes it a point of some curiosity.

> What you appear to want is something entirely different: constructing something that automatically mirrors the names of the struct members.

It's all just a question of declaring and assigning local variables, there would be no mysterious objects or runtime reflection involved in either case.


No, it doesn't make sense. Order coupling between destructuring assignment targets and class fields is very similar to order coupling between formal and actual parameters of functions, which is easier to get wrong and for which no protection exists.

Struct X could have a constructor

   X(int first, int second) : a{first}, b{second} {}
but a caller intending to set a=1 and b=2 could call X(2,1) instead (particularly if last time they checked the constructor was X(int first, int second) : b{first}, a{second} {}). You certainly don't want to prevent this "mistake" by enforcing that the first parameter must be a local variable called a and the second must be a local variable called b and that the constructor parameters must be called a and b too.

This sort of easily detected error is exactly the same as writing [b,a] instead of [a,b] in a deconstructing assignment.

Treating a sequence of fields in a class as a tuple is "unusual and slightly flaky" because few and unimportant languages do it, because it's very new for C++, and because it faces complications like inheritance, not because it is wrong or bad; someone might be slow to adapt, but everyone else will benefit.

In practice, only very plain and well documented structs will be suitable for deconstructing assignment; for example, in code generated from data serialization schemas there is a natural and well known order for class members.


> a caller intending to set a=1 and b=2 could call X(2,1) instead [...] You certainly don't want to prevent this "mistake" by enforcing that the first parameter must be a local variable called a and the second must be a local variable called b and that the constructor parameters must be called a and b too.

An Objective-C programmer might disagree with the last part. But of course it's possible to come up with examples of other things already in C++ that are equally error-prone. That doesn't mean this new thing won't be error-prone too.

I bet that many C++ programmers who are passingly familiar with C++17 would agree that the second line here:

   struct { int a; int b; } x { 1, 2 };
   auto [a, b] = x;
is syntactic sugar for

   auto a = x.a;
   auto b = x.b;
Introductory material about structured bindings routinely uses examples like this, where the names happen to match those in the structure, without making it clear that this is not what actually goes on.
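
For the record, a rough sketch of what a structured binding actually does (paraphrasing the rule, not exact standardese): one hidden object is introduced and the chosen names refer to its members in declaration order, so the identifiers never participate in any name lookup:

    struct S { int a; int b; };
    S x{1, 2};

    // auto [p, q] = x;   behaves roughly like:
    auto hidden = x;       // a single hidden copy of the whole struct
    // 'p' is another name for hidden.a (the first member),
    // 'q' is another name for hidden.b (the second member).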

> This sort of easily detected error

"Easily detected" in C++ means the compiler detects it. The rest might as well be Javascript.

> Treating a sequence of fields in a class as a tuple is "unusual and slightly flaky" because few and unimportant languages do it

Can you name one? This is one thing I've been trying to discover here, without success.

> In practice, only very plain and well documented structs will be suitable for deconstructing assignment

I do agree with this -- good practice should be to use it only for "obvious" cases.

I will use this feature, when I get the opportunity. I use element-wise structure initialisation often enough. It's convenient, this will be convenient. It also bites me occasionally and so will this.


> changing the declaration order of a struct can silently break code that uses it

This is terrible for safety. Which is not surprising in a long line of questionable language design choices made for C++.


Is it, though? x.a and x.b, and a and b, name different things. Why should they be correlated? Apart from ML, in which pattern matching works differently, I don't know of any language where destructuring tuples tries to match names rather than position (and even ML matches by position if you pattern match tuples rather than records).


Because this is a struct, not a tuple. Historically at least, I expect C/C++ structure elements to be identified by name rather than by declaration order. Changing the declaration order ideally would not silently change the meaning of code that uses the struct.

I realise that C++11 structure initialisation syntax already blurs this distinction. That's already a bit dangerous in my view, although in both cases I can understand why they chose to do it that way.
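
For comparison, a quick sketch: C++11 aggregate initialisation is positional just like structured bindings, whereas the designated initialisers added later in C++20 (not available in C++17) bind by member name:

    struct P { int a; int b; };

    P positional{1, 2};        // order-dependent, like structured bindings
    P named{.a = 1, .b = 2};   // C++20 designated initialisers: bound by name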

Are there other languages in which struct (not tuple) bindings happen by declaration order?


> Historically at least, I expect structure elements to be identified by name rather than by declaration order.

then... just use the struct directly if that's what you want? But a single binding can map to multiple structs, especially in generic code:

    template<typename T>
    double dist(T point) { 
      auto& [x, y] = point;
      return std::sqrt(x * x + y * y);
    }
should work with T's that look like

    std::array<float, 2>
or

    struct Point { float x, y; }
or

    struct Point { float p1, p2; }
or

    std::tuple<float, float>
or ...


In std::tie assignments you can use std::ignore. Not sure if there is an equivalent for structured bindings.
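
A sketch of the std::tie version (get_pair is a made-up helper); as far as I know there is no std::ignore equivalent for structured bindings in C++17, so a named dummy binding is the usual workaround:

    #include <tuple>

    std::tuple<int, int> get_pair() { return {1, 2}; }

    void f() {
        int a = 0;
        std::tie(a, std::ignore) = get_pair();   // second element is discarded
    }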



I'm completely ignorant about C++, so I will ask this question: What's the state of memory management in modern C++? Is it easier to avoid shooting yourself in the foot? Can you write code that is completely safe?

I've always flat-out avoided C++ because I don't think I will be able to handle memory management for a large program; it'll be riddled with security holes and memory leaks.

Unfortunately, many OSS programs I want to contribute to are written in unmanaged languages like C or C++. As far as I'm aware, it's easier to get memory right in modern C++ compared to C, which hasn't received many updates.


Yes, it is possible, especially if you adopt modern idioms.

- Never use C style strings or arrays directly.

- Prefer standard library types for memory management (unique_ptr, shared_ptr, containers and others)

- Don't use C style enums (class enums are type safe)

- Don't use C casts

- When dealing with C libraries, always wrap them in safe C++ wrappers with safety invariants

- Always enable bounds checking for vectors, strings and iterators on debug builds.

- Always use references for pointer parameters that aren't supposed to be null

- Make effective use of RAII

Basically it all boils down to this: don't program in C++ as if it were C with extras.
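
To illustrate the "wrap C libraries" and RAII points, a minimal sketch (FileCloser and unique_file are names I made up) using unique_ptr with a custom deleter around a C FILE*:

    #include <cstdio>
    #include <memory>

    struct FileCloser {
        void operator()(std::FILE* f) const { if (f) std::fclose(f); }
    };
    using unique_file = std::unique_ptr<std::FILE, FileCloser>;

    unique_file open_file(const char* path, const char* mode) {
        return unique_file{std::fopen(path, mode)};
    }
    // The file is closed automatically when the unique_file goes out of scope,
    // even if an exception is thrown.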


>Can you write code that is completely safe?

No. The modern C++ features like std::unique_ptr and std::shared_ptr replace old patterns of "malloc()/free()" and "new/delete". This ensures correct cleanup and avoids leaking memory.

However, there are other aspects of "memory safety", such as preventing buffer overruns and writes through invalid pointers at runtime. Virtual machines like the Java JVM and the C# CLR create a constrained memory area with extra runtime checks, and the new C++ language features don't replicate that type of memory safety. Also, those "managed" languages don't expose raw pointers as a 1st-class programming feature (unless you opt into "unsafe" blocks), so that in itself creates an environment of extra memory safety.


"This ensures correct cleanup and avoids leaking memory."

Although shared_ptr does open you up to a whole new class of bugs, with circular references causing memory leaks and object lifetimes being harder to reason about, especially in a multi-threaded environment.
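
A minimal sketch of that cycle problem:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // a strong reference; std::weak_ptr here would break the cycle
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a;   // cycle: each node keeps the other alive, so neither is ever freed
    }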

I agree in general, though, things are a lot better.


> I'm completely ignorant about C++, so I will ask this question: What's the state of memory management in modern C++? Is it easier to avoid shooting yourself in the foot? Can you write code that is completely safe?

I would not say completely safe if we only consider the language, but memory bugs are a rare occurrence: with asan / ubsan from GCC and Clang (both have different sets of checkers), -D_GLIBCXX_DEBUG and clang-tidy, it's frankly impossible to have this kind of bugs. Just running by default with -fsanitize=address -fsanitize=undefined will help you trivially eliminate so many bugs it's not even funny.

However, you fight the language much less than in Rust in my experience, and if you don't do memory allocations directly but instead use <vector> and others it will be a breeze.
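
For reference, a typical invocation combining those flags might look something like this (illustrative; adapt to your build system):

    g++ -std=c++17 -g -fsanitize=address -fsanitize=undefined -D_GLIBCXX_DEBUG main.cpp -o main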


If binary libraries are part of the build I wouldn't say that "it's frankly impossible to have this kind of bugs".

Sanitizers still have some issues with them.


I am of the opinion that binary distribution should disappear. It's not a problem in 99% of the languages out there, why should it be one in C / C++?


Not everyone likes to give their code away for free, or has cluster based build environments.


>Can you write code that is completely safe?

Yes. Just don't use raw pointers. Use unique_ptr, shared_ptr and weak_ptr instead.


smart pointers are a good start, but completely safe is a pretty big exaggeration.

Just off the top of my head smart pointers do not protect against:

- null pointer dereferences

- out of bounds array access

- iterator invalidation

- dangling (non-owning) references
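
A minimal sketch of the invalidation/dangling case from that list; no smart pointer is involved, so none can help:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int& first = v[0];   // non-owning reference into the vector
        v.push_back(4);      // may reallocate; 'first' can now dangle
        // reading 'first' after a reallocation is undefined behaviour
    }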



Also, keep things on the stack (C++11 onward has features that allow avoiding copies).
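
A sketch of what "avoiding copies" buys you: returning containers by value is cheap since C++11 thanks to move semantics and copy elision, so there is often no need to hand out heap-managed pointers at all.

    #include <string>
    #include <vector>

    std::vector<std::string> make_names() {
        std::vector<std::string> names{"alice", "bob"};
        return names;   // moved (or elided), not copied element by element
    }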


These things don't make it safe in the security sense. They just help against memory leaks.


I don't think you can ever write code that's completely safe then, if nothing else some processor flaw will come along and ruin everything.


While it's true that processor flaws destroy the assumptions that higher level components (such as any/all programming languages) build on, you don't need to go nearly that far to see unsafety in C++, even using only the most modern techniques: use-after-move of many types is undefined behaviour (for instance, dereferencing a std::unique_ptr that has been moved from), and iterator invalidation & dangling references aren't addressed by those smart pointers at all.
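
A minimal sketch of the use-after-move case:

    #include <memory>
    #include <utility>

    int main() {
        auto p = std::make_unique<int>(42);
        auto q = std::move(p);   // p is left null (moved-from)
        // *p;                   // dereferencing the moved-from pointer is undefined behaviour
        return *q;
    }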


The code examples are great but I also want examples of compiler errors when things go wrong.

C++ sort of has a reputation for compilers emitting messages that are, to put it mildly, unhelpful and verbose. While they’ve improved over the years, I’d like to know if the new features come with new confusing error messages.

Saving 30 seconds writing a one-liner doesn’t really “count” if there’s a chance of seeing a 40 line template-unrolling when deduction fails due to a typo. Does that happen here?


What is the intended use of auto in these cases in a statically typed language? It seems counter-productive, because in C++ writing type annotations is an aid to the code reader to understand the code author’s intent. Making the code reader also infer the types of constituent variables, and remember those types in their head as they read the surrounding code where some e.g. structured binding was used to assign to some variables — this seems like a very bad thing to do to code readers, and you gain hardly anything in terms of terser syntax.

Compared with Python, say, where you always know that the burden of understanding how a variable is used is part of the reader’s responsibility, maybe with conventional help in docstrings or optional type hints, this C++ feature seems like the worst of both worlds. You still have to add some annotation (“auto”) and keep in mental cache an understanding of the types, instead of just being nice and explicit about it and declaring things with the types.

So you don’t get the freedom of duck typing on the auto’d variables, and you trade for slightly reduced syntax instead of explicit annotations for the reader.

It seems like it would immediately become a C++ anti-pattern to use structured bindings. It’s cute instead of pragmatic.


Type inference (or type deduction, as it's called in C++) is the number one reason for the resurgence of static typing these days. I for one would find C# and C++ much less pleasant both to write and read without it; so much so that I would probably use a dynamically typed language instead.

Your perspective is that type annotations are there for the reader. I agree with tzahola that type safety itself is valuable in its own right without forcing the user to type little reminders to himself everywhere. Thanks to language servers, I can hover an expression in a text editor and remove all doubt.

I also agree with the other responses that if any language benefits from type deduction, it's C++. No way would the code be more readable if you were forced to be explicit about some of the dependent types in the STL.

That said, you are obviously not alone in your perspective. I've heard the exact same arguments from experienced C# programmers.


It’s funny because for me, I spent well over a decade as a Python engineer. And the thing I find most valuable about statically typed languages is the act of writing type annotations, creating typedefs, etc., as part of the overall design of a program.

Type safety and compilation have never really mattered to me — it was always extremely easy to write safe programs in Python, without crazy indecipherable language grammars or long compile times. I would say, having written and maintained large programs in C++ and Haskell, that it is no easier to write safe programs in those languages than in Python. The compiler doesn’t save you time by catching bugs; that stuff is all just myth.

But what it does offer you is a way to communicate the design of your program through efficient type annotation, and give a lot of clarity into how things are intended to flow, and exactly at what points certain types of extensibility can be implemented.

I’m sure this can vary by project and personnel, but it has been really true for me. It is so, so easy to write safe, huge programs in Python, and the fear of hitting an uncaught runtime problem after lots of wasted time is misguided.

Meanwhile, the benefit of static typing in Haskell, for example, is that I can create stub functions for everything while working on the right type-level flow of operations.

Then I have a nice end to end skeleton code with expressive application-specific type descriptions as I start to fill in the implementation for the stubs.

And at the end, people can understand the code from reading the flow of type annotations.


There are cases (when writing template libraries) where you don't know the types you'll be using yet, and you need auto to express what you want.
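
A small sketch of that situation, where the concrete type genuinely isn't nameable up front:

    #include <iterator>

    template <class Container>
    auto first_element(const Container& c) {   // return type deduced per instantiation
        auto it = std::begin(c);               // iterator type depends on Container
        return *it;
    }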

Apart from those (admittedly) rare cases, after a few years using auto, I'm not as opposed as I was. Generally I use auto by default, and replace it with the actual type when I revisit that code and have to actually think about what the type would be. I'd guesstimate that I do that in maybe 10% of the cases where I initially used auto. It makes code much more readable (as in, 'reading over'), too. Turns out it's only when you try to reason an algorithm through line by line that you actually need the type, and in the other cases (where you're just roughly following along, which I find happens more often) having 'auto' abstracts away some of the mental overhead of having type declarations.


> “Turns out it's only when you try to reason an algorithm through line by line that you actually need the type,”

Yes, but for other people reading your code, this would be way more often than 10% of the time.

So you save a tiny bit of effort in 90% of cases for you, while other people have to exert more effort in probably 90% of cases for them. It seems like the minute the number of people needing to read and understand that section of code exceeds 2 (you & one other), then this type of use of auto is actually increasing the amount of overall work required linearly for every additional person added in.


I don't really agree. I don't think there is much difference between reading code you wrote a few weeks ago and code someone else wrote - at least not in this context.


We might just have to agree to disagree, but I would disagree with your claim here in the strongest possible terms.

When someone new comes into a project, you can’t assume what skill level they’re at, how they think about code, what design patterns have been most strongly used in the teams they were previously on.

Code you wrote a few weeks ago is still going to be hugely similar to the types of code you often write, the ways you think about design patterns, etc., especially if you’re on a team for a long time.

Code someone else wrote very frequently looks completely alien, especially for new people, or someone from a different team that has to collaborate on one small cross-cutting problem, or people who look at your code long after you left the company.

Using things like auto because it has some benefit as you write the code, in your mind, is deceptive for exactly this reason. Those criteria are hardly ever still relevant when it is someone else reading the code.


Sure. Do you have experience using 'auto'? I'm asking because I once thought exactly like you, but changed my mind after a few years of real-world experience. But of course, YMMV.


Yes. Previously at a proprietary trading company, we refactored a large subset of the code to use auto, and found that people, especially traders and quants, who have to do projects extending that code on a more ad hoc basis, had a drastically harder time reading and reasoning about it, so we rolled back and instead had a company convention to disallow the use of auto.

If the only people reading the code was my team, then it probably would have been fine (though still a silly thing to get rid of or to stop writing new type annotations).

But that’s never the case in real software. Other people needing to quickly grok it is usually concern #1, even higher than performance most of the time (and that stuff powered a real-time trading system).


This is a matter of debate in the C++ community. The “almost always auto” camp has the position that it makes the code more flexible and easier to refactor: the fewer times you write a type name, the easier it is to change. It also has its uses in template-heavy code, which is popular in modern C++.

There is a group, such as yourself, that values the extra context the type provides. I think the counter to that is that modern tooling can provide this information to you. To split the difference, you can use auto just where the type is redundant, like with make_shared, or verbose and unneeded, like with iterators.
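
For instance, the two cases just mentioned (redundant with make_shared, verbose with iterators) might look like this sketch:

    #include <map>
    #include <memory>
    #include <string>

    void demo(std::map<int, std::string>& m) {
        auto widget = std::make_shared<std::string>("hi");  // the type is already spelled on the right
        auto it = m.find(42);                               // vs. std::map<int, std::string>::iterator
        if (it != m.end()) *widget = it->second;
    }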


While modern tooling is helpful when coding, doing review or even just casual reading of code with heavy use of auto is awkward, especially for parts one has not worked on recently. The tooling is just not there at all. Sometimes the only choice is to apply the patch and check the code in an IDE.

Haskell and even Rust are less problematic in that area, as Haskell does not have all those & const specifiers with complex rules of type deduction. And with Rust it is easy to see whether a typeless let introduces a reference, a copy or a move.


> The tooling is just not there at all.

How so? Most IDEs are able to tell you what type is behind your auto on hover.


Try to do that when viewing a diff! And when reading the code I have found that asking IDE for the type distracts from grasping what code is doing.


But reading code or a diff of a type-inferred language is far from impossible.


It might be easier to change types behind the scenes when using auto, but why would someone feel that is going to be a common case? When are you completely changing types all the time?

More often, the types don’t change all that much, but their behaviors get extended and new features have to be calculated in some block of code. If you can see the type explicitly, then you know what operations it supports, etc.

If I come across a structured binding where x and y are auto assigned as the unpacking of struct fields whose types might be complicated, now I have local variables x and y that I cannot reason about until I backtrack manually to figure out what type they will be deduced to having (and then keep that in memory any time I’m tinkering in that section of code).

I guess for the cases when big structural overhauls are happening behind the scenes to radically change underlying types and designs, auto would reduce a bunch of rework on type annotations. However, I just can’t believe anyone would allow that to be a common enough use case for it to matter more than explicit type annotations.


> If I come across a structured binding where x and y are auto assigned as the unpacking of struct fields whose types might be complicated, now I have local variables x and y that I cannot reason about ...

One is free to not use the auto keyword in a scenario like you mention in order to make the types explicit regardless of what the "Almost Always Auto" camp says.

However there are scenarios where leaving the type out does not degrade readability and sometimes improves readability. Often cited examples of situations where it may improve readability usually look somewhat like this:

    void print_vector_of_maps(std::vector<std::map<std::string, std::string>> v)
    {
        for (const std::map<std::string, std::string>& item: v) {
            for (const std::pair<const std::string, std::string>& entry: item) {
                std::cout << entry.first << ": " << entry.second << '\n';
            }
            std::cout << "---\n";
        }
    }
With the auto keyword, it looks like this:

    void print_vector_of_maps(std::vector<std::map<std::string, std::string>> v)
    {
        for (const auto& item: v) {
            for (const auto& entry: item) {
                std::cout << entry.first << ": " << entry.second << '\n';
            }
            std::cout << "---\n";
        }
    }
The types are still known from the surrounding context and the readability has arguably improved.


The auto example here looks strictly worse than just creating a descriptive typedef/typename to shorten the syntax of the first approach.

I’m all for shorter syntax, just not at the expense of completely explicit type annotation.
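
To make that concrete, a sketch of the typedef approach being argued for (StringMap is a name I made up):

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using StringMap = std::map<std::string, std::string>;

    void print_vector_of_maps(const std::vector<StringMap>& v)
    {
        for (const StringMap& item : v) {
            for (const StringMap::value_type& entry : item) {   // std::pair<const std::string, std::string>
                std::cout << entry.first << ": " << entry.second << '\n';
            }
            std::cout << "---\n";
        }
    }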


The usage of the word "strictly" here is strictly sloppy.

Why is a typedef/typename better than using auto in this case? You can literally look up the actual type the auto keyword is deducing by looking at the function parameters two lines above.

How many new typedefs do you want to create for code like this? You are suggesting creating two typedefs for this tiny 4-line code only. Imagine how many typedefs you might need to create in a real and complex project. How is the mess of typedefs going to be any more maintainable than this?

There are places where the auto keyword would make it hard to understand the type and there are places where it won't. This is a place where the auto keyword does not make it hard to understand the type.

Don't use the auto keyword if you don't want to but pretending that there is no value in using the auto keyword by using weasel words such as "strictly" is disingenuous.


> “Why is a typedef/typename better than using auto in this case? You can literally look up the actual type the auto keyword is deducing by looking at the function parameters two lines above.”

Because a typedef makes the syntax equally as short as auto in all practical senses, while keeping the item type explicit. Even when figuring out the item type only requires looking up a few lines, this limitation of auto is still worse while not providing any “terseness” advantage. And obviously using auto as a kludge for a type of polymorphism where you switch around header files, etc., to arrange for the upstream type to change and rely on other code’s use of auto to “just work” is a severe and critically flawed kind of anti-pattern.

In most cases it won’t be anywhere near this easy anyway; the type of the function parameter should already have had an application-specific typedef to convey its meaning, so the extra use in places where auto would be used amortizes it further.

And since you get much more benefit from the typedef way in more complex situations, it would be better style to just use it consistently by using it in simpler situations too, especially since when the situation is simple, like this example, auto by definition doesn’t provide an advantage.

This is why I specifically used the word “strictly” like the mathematical sense of strictly increasing or a dominated function. The typedef approach for this is superior unilaterally.


> In most cases it won’t be anywhere near this easy anyway

Then don't use auto in such cases -- that is the point of the discussion here.

Use auto anywhere it is indeed nearly this easy.

When the situation is simple, auto clearly provides an advantage of automatically deducing types that are very obvious to humans and the compiler. The argument here is that not having explicit types here is better.

You cannot use the word "strictly" here in the mathematical sense at all, since you have not mathematically established that explicit types for simple cases are better. It is only your subjective opinion that explicit types are better even in simple cases, and such an opinion is far from the exactness and axiomatic nature of mathematics.

You pretend as if the proposition that "explicit types are better" is an axiom. It is not. To many of us such proposition is neither self-evident nor obvious because many of us believe removing explicit types for simple cases is better. As such, the usage of "strictly" here is merely a weasel word.


> “When the situation is simple, auto clearly provides an advantage of automatically deducing types that are very obvious to humans and the compiler. The argument here is that not having explicit types here is better”

That’s not an argument for anything. It amounts to just saying “worse is better” because you have a preference for how it looks with auto, even though being explicit with the type information is not about you.

There’s absolutely no sense in which less explicit information is somehow better for code readers. The only reason you might choose to pay the cost of having less explicit type info is if it gives you some other benefit, like shortening syntax.

But since typedef/typename offer that too, without the loss of being explicit, it just means auto would be a suboptimal choice even in the shorter/simpler case too.


> There’s absolutely no sense in which less explicit information is somehow better for code readers.

This is patently false. It has been demonstrated more than once in this thread that there clearly are cases where the types can be deduced effortlessly and avoiding explicit information leads to cleaner code.

Your claim is only your subjective opinion/belief and definitely not a fact.

We might just have to agree to disagree, but I would disagree with your claim here in the strongest possible terms. Constantly pretending that your subjective beliefs are facts comes off as obtuse and is in poor taste. There is no one-size-fits-all standard for coding. You are free to follow your coding standards and so is everyone else. What looks like an obvious argument to you may appear absurd to another. Like I said, the world of software development is far more heterogeneous than you seem to believe it is.


> “It has been demonstrated several times in this thread that there clearly are cases where the types can be deduced effortlessly and avoiding explicit information leads to cleaner code.”

I do not see a single example in this thread that demonstrates that. Absence of explicit type info is an inherently bad thing. It’s never good to offer code readers less explicit / more ambiguous info.

Like I said, you’d expect some offsetting benefit (what you call “cleaner code”) to compensate for paying the penalty of giving up valuable explicit info. But auto does not actually provide that in comparison to typedef/typename. auto, by comparison, is not meaningfully cleaner code. It only loses valuable explicit information, without an offsetting benefit that couldn’t otherwise be obtained without losing that valuable explicit information.


> Absence of explicit type info is an inherently bad thing.

Let me reiterate: It is your opinion which many people here do not agree with in the context of types obvious from context. Your opinion is definitely not a popular coding standard and definitely not a fact. Most of the coding standards are in favor of using auto for simple code where the type is immediately obvious from the context.

The auto keyword is available to help people who do not share your opinion. The auto keyword is present for teams which agree that absence of explicit type info is a good thing for simple code with obvious types (obvious from the context).

Teams that believe that absence of explicit type info is an inherently bad thing don't have to use the auto keyword. But that belief does not become a fact no matter how many times you reiterate that belief.

> paying the penalty of giving up valuable explicit info

That's your opinion. My opinion, which many coding standards agree with: there is no penalty in giving up explicit type info in a local loop variable. The type info is not valuable in that case.


In my comments my goal was to describe why those other beliefs about auto are just fundamentally mistaken. To lobby for a certain way to think of it.

My perspective on this actually is a very popular coding standard in some communities. For example, in The Zen of Python [0] (which is an official Python PEP) there is “Explicit is better than implicit” right in the second line. It’s quite a widely regarded principle in programming.

Still, given that I only lobbied to persuade that my perspective is the right one in this case, and did not ever try to claim that people cannot have a different opinion, your comment seems unrelated to the earlier part of the discussion.

[0]: https://www.python.org/dev/peps/pep-0020/


> When are you completely changing types all the time?

I do, often enough (multiple times per month at least) that I have headers of typedefs for the alternative types that I have to use and which provide different performance characteristics and behaviours.


In that case, it sounds like very bad general software design that is not well-crafted to the problems you are solving. Instead of building the right kind of extensibility in light of actual requirements that have self-evidently required you to change underlying types a lot, you are misusing something like auto to instead paper over a bad design.

Needing to constantly switch things by creating different sets of headers with different groups of typedefs, and then rely on the codebase to have happened to use auto in just the right ways that things still compile correctly with the swapped-in types is horrible.

Like, absolutely amateur, risk of getting fired level of badness. I can scarcely think of a way to design system-level polymorphism that would be more directly hostile to new developers or other people maintaining your code than the approach you’re describing.

This sounds like the type of pathology when a developer creates a needlessly complex system purely for rent-seeking and job security. You’ve built a system nobody else can maintain because ever so slight changes to the way it uses auto can bring down the whole house of cards from your variety of special header files. Nobody else wants to catch that falling sword, so when they have to work on or change that code, they just tiptoe on eggshells to mimic whatever existing use of auto there is, out of complete fear that more specific type annotation is going to break things. This is not a hallmark of a good project. More like painting everyone else into a corner that only you happen to prefer.


> In that case, it sounds like very bad general software design that is not well-crafted to the problems you are solving.

That does not sound like bad software to me at all. It is quite a stretch to say that types changing during the course of software development implies the software is badly designed. This sounds very backwards.

On the other hand, I believe software that is malleable enough to handle changing types easily during the course of development is good software.


> “That does not sound like bad software to me at all. It is quite a stretch to say that types changing during the course of software development implies the software is badly designed. This sounds very backwards.”

But nobody said that. The parent comment said the system has a requirement to deal with changing types. OK. Sure. Happens all the time.

Solving it by using auto all over and rotating header files, now that is the part that is simply indefensibly bad. It leaves a codebase where nobody can reason locally about any types, because they would be compile-time dependent on which headers you choose, which is insane. Accidentally make one local variable non-auto by annotating a specific type, and suddenly it works with one set of headers but fails indecipherably with a different set.

There are many better approaches, like inheritance-based designs for specialized subtypes, or using discriminated union types and multiple dispatch patterns.

You’re acting as if the parent comment was just talking about mundane refactoring of types here or there, or some small changing requirement, but it’s not at all.

It’s talking about some kind of problem where the commenter claims that huge amounts of types tracing throughout the whole project have to change wholesale by rotating in different headers and relying on auto as essentially a kludge version of whole-program polymorphism.

> “On the other hand, I believe a software that is malleable enough to handle changing types easily during the course of software development is good software.”

I believe that too, which is why this kind of system that relies on the whole codebase having no localized type annotations (so you can’t reason locally about code when you go to refactor or change types, etc.), is so egregiously bad. Making it robust to type changes is a property of how well other developers can modify the code itself and reason about it. Using auto dogmatically like that totally strips away someone else’s ability to do that.


> Solving it by using auto all over and rotating header files, now that is the part that is simply indefensibly bad. It leaves a codebase where nobody can reason locally about any types, because they would be compile-time dependent on which headers you choose, which is insane.

I feel that you are overreacting a bit. One of my use cases, for instance, is: https://github.com/OSSIA/libossia/blob/master/OSSIA/ossia/de...

The software I work on performs sometimes up to ~25% better with flat_hash_map - some projects I worked on (as a user of the software) would simply not have been feasible without it or a similar container. Would you seriously consider replacing your map implementation with a variant, or worse, inheritance, which forces you to heap-allocate and adds an additional indirection? That would be literal madness here, where I'm fighting to make a tick function run in 15 microseconds instead of 20.

The theoretically cleanest solution would of course be to make the whole codebase generic on Map_T. I hope you see why this is not optimal in C++ when you have ~100 unit test executables - each test would literally need to recompile everything.

> Accidentally make one local variable non-auto by annotating a specific type, and suddenly it works with one set of headers but fails indecipherably with a different set.

Except it does not because the types conform to the same concepts.


> this kind of system that relies on the whole codebase having no localized type annotations (so you can’t reason locally about code when you go to refactor or change types, etc.), is so egregiously bad.

Nobody said that either. The parent comment specifically said that they have alternate types to provide different performance characteristics and behavior. Such code bases do exist contrary to what you believe.

Don't think of it as an Apple type suddenly replaced with an Orange type on one fine day and your head begins to spin when you try to understand what the code does.

Think of it more like int32_t being replaced with int_fast32_t because you realize you want to port the code to a new architecture due to which int_fast32_t makes more sense now. Now the argument should not appear as "egregiously bad" as you make it look like.

The world of software development is far more heterogeneous than what your experience might have led you to believe.


My particular cases are with various implementations of hash maps and ordered maps that I switch, for instance std::{unordered_,}map, boost::flat_map, pubby::flat_map, tsl::hopscotch_map, ska::flat_hash_map, and a few others.
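
Something like a single alias header does the switching (a sketch; the macro and alias names are made up, with boost::container::flat_map standing in for the various third-party maps):

    // map_alias.hpp -- hypothetical configuration header
    #ifdef USE_FLAT_MAP
        #include <boost/container/flat_map.hpp>
        template <class K, class V> using map_t = boost::container::flat_map<K, V>;
    #else
        #include <map>
        template <class K, class V> using map_t = std::map<K, V>;
    #endif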


It’s funny to hear someone say it is “very rigid beliefs” to disagree with dogmatically using auto everywhere just to get polymorphism by swapping header files, which is a far more drastic and brittle belief.

I have worked on systems deploying to embedded devices, so I am familiar with e.g. needing to suddenly change a bunch of code to cause some typedef to refer to int16 because a platform doesn’t support int32, and needing the codebase to be robust to the change. Whatever solution you pick, sacrificing localized explicit type info is not a good trade-off, and ability to read code and locally easily reason about types only gets more critical as the application gets more architecture or performance dependent.

Besides, many systems that need to achieve this same effect do so by either using discriminated union (std::variant) and multiple dispatch patterns, or by designing the different specific subtypes to fit into an inheritance hierarchy, so that switching things to use e.g. different integer precisions is a matter of using a different, specialized subtype. Forcing the use of auto everywhere and switching types by header files would be a nuclear option by comparison.
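
For the record, a tiny sketch of the discriminated-union alternative mentioned above, where the set of possible types is closed and visible at the use site:

    #include <iostream>
    #include <string>
    #include <variant>

    using Value = std::variant<int, double, std::string>;

    void print(const Value& v) {
        std::visit([](const auto& x) { std::cout << x << '\n'; }, v);
    }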


> ability to read code and locally easily reason about types only gets more critical as the application gets more architecture or performance dependent

Exactly! And using auto for variables with very limited scope where the type is obvious effortlessly does not negatively affect the ability to read code and in my opinion improves the ability to read code.


It's to avoid having to write out complex types that the compiler already knows about.

It's particularly useful with generic code that makes heavy use of templates, or code that uses iterators. In these cases, it just isn't that useful for the reader to know the exact type.


I think it was introduced to help with things like lambdas, or other constructs where the coder may not know the type, or where it would be difficult to work out the type.

Then the usage of auto was expanded to help when writing generic code.

I think both of these cases are valid uses of auto, but people seem to have run away with it and will literally use it for statements like auto i = 0;

As a professional C++ coder, I personally don't have a problem with auto (or var in C# for that matter) but I'd prefer people use it where it's useful rather than for every little thing.


I agree, but TFA said:

> modern C++ style using auto almost whenever possible.

Using auto everywhere, to me, is bad code. It's hard to understand what types we're dealing with, at a glance.


If you use an IDE, it is very easy to look at the type inferred by auto. So no worrying about types unless you want to.


What if you’re trying to read the code in a GitHub PR diff? Now I can’t review it because I actually have to go waste time getting the code into some IDE that offers me hovering type info? And what if you use only a console-based editor? Now you gotta abandon the developer tools that work for you because of some language feature?

It sounds crazy to me to hear someone say that it’s OK to tolerate the downsides of a language feature because some other tool like an IDE might mitigate them in some cases.

Why are we contorting our tools to overcome language problems and acting like it’s OK, instead of designing better languages in the first place that don’t require tooling gymnastics to make them readable?


`auto` does not make your code any less statically typed: http://webm.land/media/hJeQ.webm
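
i.e. the deduced type is fixed at compile time, just as if you had spelled it out (trivial sketch):

    void f() {
        auto x = 5;        // x is deduced as int, once, at compile time
        // x = "hello";    // error: no conversion from const char* to int
        x = 7;             // fine: still an int
    }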


Who said that it did? I never said anything like that: only that auto removes useful type annotations.


auto can be very handy in writing template code; situations in which the type depends on the templated type(s).


You assume that the point of using types is readability, which is false.

Types let the compiler reject a large set of invalid programs, which would otherwise produce a runtime error in a dynamic language.

    for (auto foo : foos) {
        foo.bar();
    }
I don't care about the exact type of `foo`, so I let the compiler deduce it behind the scenes. However, I still want it to break my build if `bar()` doesn't exist!


Your example does not convince me. As a code reader, I would care very much about why foo has a bar() method. Is it because foo is a Widget or a subclass thereof? Then I can backtrack to understand the implementation being inherited. Or maybe it’s some other class that uses bar() for a totally different purpose.

Who cares if the compiler can deduce it. That’s not helpful to me at all if I am scanning this section of code because I have to add more method calls inside the loop to calculate new results, and I don’t know what base type foo even is to get started with what would be required.

> “You assume that the point of using types is readability, which is false.”

This just seems wrong to me. Communicating your designed intention with type annotations is the number one reason to use them, whether you’re writing C++ or Haskell.

Getting the compiler’s help to prove correctness or avoid bugs is a secondary reason. Sometimes it becomes the primary reason in isolated cases. But usually it’s a secondary consideration nowhere near as important as using types to communicate the design of the program.


> This just seems wrong to me. Communicating your designed intention with type annotations is the number one reason to use them, whether you’re writing C++ or Haskell.

sure, but most haskell programs leave type annotations out, it's only some specific parts where it enhances clarity that it helps ; most of the time it would be unneeded noise, just like in C++.

I really don't want to go back to the dark days of

    for(std::vector<std::pair<float, int>>::const_iterator it = v.cbegin(), end = v.cend(); it != end; ) {
      if(it->second > 1 && std::distance(v.begin(), it) % rand() == 0) 
        it = v.erase(it);
      else 
        ++it; 
    }
versus

    for(auto it = v.cbegin(), end = v.cend(); it != end; ) {
      if(it->second > 1 && std::distance(v.begin(), it) % rand() == 0) 
        it = v.erase(it);
      else 
        ++it; 
    }
In a lot of cases, it can even cause bugs: for instance, the value type of std::map<int, std::string> is std::pair<const int, std::string>.

If you ever happened to write:

    std::map<int, std::string> map; 
    // ...
    const std::pair<int, std::string>& value = *map.begin(); 
then congrats, you have unneeded memory allocations because the pair will be copied in order to match the explicit type.

whereas here:

    const auto& value = *map.begin(); 
you know that you get exactly the correct type - maybe at some point you will replace with a boost::flat_map or another which has a different iterator type, and when you do this, you won't need to change any other code anywhere else - it will be automatically correct.


With reasonable typedefs the first loop becomes:

    for(my_vector_t::const_iterator it = v.cbegin(), end = v.cend(); it != end; ) { ... }

and if that looks long, then the problem is the poor choice of type names in the STL. I.e. if the const_iterator were named citer, then with

    for(my_vector_t::citer it = v.cbegin(), end = v.cend(); it != end; ) {
    ...
    }
the need for auto is greatly reduced. Besides, if one just uses indexes, the loop becomes:

    for(size_t i = 0; i != v.size(); ) {
        if (v[i].second > 1 && i % rand() == 0)
            v.erase(v.begin() + i);
        else
            ++i;
    }
that does not have the dangling-reference bug from the original loop, is easier to follow, and with a sane compiler one gets the same code.

Surely the code only works with vectors, but when one deals with a vector of pairs, it implies a very specific problem that is rather unlikely to change. It also shows that the need for auto really comes from the design of the STL, which leads to very long types that auto mitigates.


In fact, in templated code decltype(v)::const_iterator might be preferable to plain auto in some cases as it forces a type that does have one of these...


> Surely the code only works with vectors, but when one deals with a vector of pairs, it implies a very specific problem that is rather unlikely to change.

maybe, maybe not. I often have much deeper changes than this.

> It also show that the need for auto really comes from the design of STL that leads to very long types that auto mitigates.

the same problem exists with int32_t or uint32_t or even int or char. The point is, using auto makes your code much more resilient to changes and much less likely to introduce bugs or performance losses due to unwanted conversions.


> “sure, but most haskell programs leave type annotations out, it's only some specific parts where it enhances clarity ”

This is very false, I can tell you from having written Haskell in production for several years. It’s widely regarded as an anti-pattern in Haskell to leave out type annotations, even for constant values, except in the most trivial of cases.

Type annotations are a design feature of Haskell. You usually use the “type” keyword to define custom type names for primitive types just to give them descriptive, application-specific names, like

    type PhoneBook = [(Name, Number)]

just so you can add more explicit annotations and communicate the flow of your program very explicitly in the type annotations.

You definitely do not look at it like minimally using type annotations “only where it adds clarity” because the whole point is that as the code author you don’t know where clarity is needed. That is for the code reader who has to change your work later on.

Incidentally, I actually very much think that your first example of that for loop at the bottom is a lot clearer than the one using auto.

Just make a descriptive typedef / typename of that long const_iterator type. Make something readable so that we know why there is a vector of pairs of (float, int).

If you want to cut down the syntax there, auto is not the right way. A descriptive typedef is the right way using type alias names that communicate your intended design.


> You definitely do not look at it like minimally using type annotations “only where it adds clarity” because the whole point is that as the code author you don’t know where clarity is needed. That is for the code reader who has to change your work later on.

Are you serious? In standard Haskell you can't even add type annotations to bindings; that's how relevant type annotations are to the language. They're only available in type & function definitions. I seriously doubt that you use

    {-# LANGUAGE ScopedTypeVariables #-}
and that all of your code looks like

    main = 
      let (x :: Integer) = 123 in 
      print $ x
That's literally what you expect C++ programmers to do when you ask them not to use auto, and I've absolutely never seen production Haskell looking like this - common Haskell is much closer to this: https://github.com/ivanperez-keera/haskanoid/blob/master/src....


Nah. ScopedTypeVariables is extremely commonly used, and Haskell as a language is one where frequent use of extensions is common generally. Saying “you can’t even add type annotations to let bindings” shows a misunderstanding of Haskell, because you can and the way the language accomplishes it is with ScopedTypeVariables. It’s not “special” or “extra” in Haskell, using extension pragmas is just a super common, mundane, every-day thing in Haskell.

I should also point out that when you say

> “They're only available in type & function definitions.”

you are wrong.

Firstly, you can (and should almost always) use type annotations for plain values, like

    pi :: Float
    pi = 3.14

and e.g. with more complex data constructors, like in the section on constructor functions at https://en.m.wikibooks.org/wiki/Haskell/Type_declarations

For value constructors that use complex type dynamism in Haskell, like the return value polymorphism in Haskell regular expressions, it’s critical to use type annotations on plain values, because the type annotation actually resolves which of the polymorphic return types will be chosen.

Secondly, and more importantly, you can use type annotations on bindings natively in a where clause, and this is a very fundamentally important thing to do, especially for locally defined helper functions.

Then you only need ScopedTypeVariables if your helper function needs to involve a generic type constraint from the polymorphism of the outer function declaration, which is vastly more rare.

I’d also say the comparison with the types of local variables that tend to be unannotated in Haskell let or where clauses is very disingenuous and it’s actually not similar to the iterator boilerplate example you gave before at all.

When all the arguments have well-annotated types, and you use a let or where variable to hold a temporary value, the type is inferred by a reader much more easily than when using auto in C++, and has a direct meaning in terms of the calculation at hand (e.g. it’s not done to reduce boilerplate like auto).

For example, if you’re trying to calculate distance between two tuples (x0, y0) and (x1, y1), where the types of those tuple elements are given explicitly just a few lines up, then there is little harm in doing something like

    where dx = x1 - x0

though it’s still preferable to say e.g.

    where dx = x1 - x0 :: Float

This is emphatically not what you’re doing with auto, because the types are not otherwise locally obvious, and even if they were, a judicious use of typedef / typename saves just as much syntax while being far more descriptive.


now let's be serious. A google search for

    "ScopedTypeVariables"
returns 13 300 results. That's minuscule. But well, let's just say that we have a fairly different experience of haskell code.

> When all the arguments have well-annotated types, and you use a let or where variable to hold a temporary value, the type is inferred by a reader much more easily than when using auto in C++, and has a direct meaning in terms of the calculation at hand (e.g. it’s not done to reduce boilerplate like auto).

... In (non-generic) C++, function arguments always have types. You can't have `auto` in function parameter declarations (thankfully it should come in C++20) - the comparison stands.

I've heard so much reticence about auto like yours from coworkers and interns, and then after three days in a large auto-enabled codebase no one ever went back to writing unnecessary types.



