
Modern C++ can be very pythonic. Still not as easy as writing Java in IntelliJ but CLion is becoming quite good (if you succumb to CMake). People write shit code in every language; C++ has my favorite abstraction to performance ratio.

http://preshing.com/20141202/cpp-has-become-more-pythonic



As a non-C++ coder there are two things that "scare" me about C++ (ok, maybe not scare, but are definitely big turn offs):

- lack of a proper, cross-platform package manager: I know the history, I know that C/C++ are basically embedded in the system and use the system package manager, but in this age it just doesn't feel right... I've been spoiled by many years of using Maven and other similar tools

- template meta-programming plus the C++ legacy features which are probably lurking in many third-party libraries (simpler languages have fewer features to abuse, and even when for whatever reason you have to use a relatively subpar third-party library, you know it can't do that much damage)

There's also the compilation speed, but as far as I know there are workarounds for that.

These things are, if not unsolvable in C++, at least many years away from being available.

Maybe an experienced C++ coder can destroy my preconceptions :)


> Maybe an experienced C++ coder can destroy my preconceptions :)

- Generally C++ uses the OS-level package manager, like rpm or dpkg. It's not cross-platform and it doesn't have a nice interface in the source code (#include is... unfortunate), but there is a nice solution here if your organization has the discipline and tooling expertise to make it work.

- Template metaprogramming isn't bad to use at all. Everyone uses it every time they construct a vector<>. It takes a lot of skill to write, though, and unfortunately even users of TMP can end up with nasty and bloated compiler errors. Hopefully this will be addressed by "concepts" in forthcoming versions of C++, but it might be a while before that filters down to stable C++ compilers and libraries.

- The real issue that should scare you is something called "undefined behavior". It's very easy to do subtle things that cause your programs to break in weird and hard-to-debug ways. Nasty side effects that could theoretically include any side effect. Like saving garbage to your production database. By doing something as innocuous as overrunning an array.

https://kukuruku.co/post/undefined-behavior-and-fermats-last...

https://stackoverflow.com/questions/367633/what-are-all-the-...

There are tricks and tools to avoid undefined behavior, but I find that most C++ engineering orgs don't prioritize the tooling (linters, UB sanitizers, etc.) needed to let engineers do this productively. Consequently, a lot of writing good C++ involves detailed coding standards and/or very detail-oriented gurus poring over code reviews.
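
To make that concrete, here's a minimal sketch of the kind of overrun described above, plus the sanitizer flags that catch it (the file name is made up):

    // ub_demo.cpp -- a deliberately broken sketch
    #include <iostream>

    int main() {
        int values[4] = {1, 2, 3, 4};
        int sum = 0;
        for (int i = 0; i <= 4; ++i)   // off-by-one: reads one element past the end
            sum += values[i];          // undefined behavior -- may "work", may not
        std::cout << sum << '\n';
    }
    // A sanitizer turns the silent corruption into a loud report, e.g.:
    //   clang++ -g -fsanitize=address,undefined ub_demo.cpp && ./a.out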


And array over-runs are just the beginning! (friends don't let friends use indices over iterators)

Had a fun bug at work once where we somehow accidentally built against two subtly incompatible versions of the same library.

This actually mostly worked until some random virtual function call sometimes called the wrong function silently...


Yes, C++ without a monorepo is very difficult.


Yeah, undefined behaviour was my thought too. There are a few warts with templates etc., but undefined behaviour is the granddaddy of all skeletons in the linguistic closet.

Realistically, C++ isn't actually a well-defined language unless you specify the platform and compiler. You don't program in C++. You program in a C++-derived language jointly defined by the compiler and the hardware.


Besides the confusing error messages, C++ templates also tend to be weakly typed.


I'm a former C++ programmer (I haven't had an assignment that used C++ for years, though I pressed for a limited subset on a recent embedded system). I likely never reached the sophistication of most of the respondents here, and my concerns may have been addressed.

The things that concerned me about C++ were the pitfalls that could happen due to ignorance or omission vs. errors introduced by commission. For example forgetting to implement a copy constructor for a class that allocates member data. (That results in two objects that hold pointers to the allocated data following a copy operation and if one is deleted, the other points to now deleted data.) Perhaps there are better compiler warnings or static analysis tools that will reveal problems like that today.
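
For illustration, a minimal sketch of that pitfall (the class and its members are made up):

    #include <cstddef>

    class Buffer {
    public:
        explicit Buffer(std::size_t n) : size_(n), data_(new char[n]) {}
        ~Buffer() { delete[] data_; }
        // No copy constructor or copy assignment defined: the compiler-generated
        // ones copy the raw pointer, so two Buffers end up sharing one allocation.
    private:
        std::size_t size_;
        char* data_;
    };

    // Buffer a(16);
    // Buffer b = a;   // shallow copy of data_
    // When both destructors run you get a double delete, and using b after a
    // is destroyed dereferences freed memory.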

My other concern is how big the language has grown. I recall my excitement when I first worked with C++ since it provided direct support for a lot of things I was doing with C (Object orientation, function pointers in structs and similar) with the addition of better type safety. Nowadays many of the language features are incomprehensible and beyond my skill level. I suppose largely because I don't work with C++ on a day to day basis.


If you use a proper smart pointer type to store an owned object as recommended, you'll get sensible behavior: either the copy will be shallow (with shared_ptr) or the implicit copy constructor instantiation will fail to compile (with unique_ptr).
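
A minimal sketch of the unique_ptr case (hypothetical class):

    #include <cstddef>
    #include <memory>

    class Buffer {
    public:
        explicit Buffer(std::size_t n) : data_(std::make_unique<char[]>(n)) {}
        // No destructor, copy constructor, or copy assignment needed here;
        // ownership is handled by the smart pointer.
    private:
        std::unique_ptr<char[]> data_;
    };

    // Buffer a(16);
    // Buffer b = a;            // error: use of deleted (implicit) copy constructor
    // Buffer c = std::move(a); // explicit moves still compile fine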


Doesn't std::shared_ptr help with the copy constructor issue?


If you mean using shared_ptr everywhere, then 1) this would be extremely non-idiomatic C++, 2) you can still forget to do so, and 3) the moment you do, the language makes it ridiculously easy to implicitly copy something.


I am a former C++ programmer who has moved on to the other languages/technologies and no longer suffer from Stockholm Syndrome.

This is just the tip of the iceberg. There are many dark corners, historic baggage, and complex interplay of language features in C++. This complexity breeds bad codebases plagued by subtle bugs created by misunderstanding of finer points of the language.

If these things scare you, run away in the other direction as fast as possible.


I, on the other hand, am a primarily Python programmer being slowly lured to the Dark Side.

The thing I love about using C++ is that you get direct access to all these rock-solid libraries in C that everything is built on -- zlib, libcurl, etc. So instead of dealing with a tower of wrappers in a high-level language, each of which has quirks and bugs, you get that core lib, where you can safely assume 99% of the time any problem is your own fault. Even when a C++ wrapper exists, I use the C API just to reduce deps and avoid the kind of problems you're talking about, which I think boil down to templates most of the time. I don't use templates unless there is no other way.

This strategy works pretty well for me, but my codebase isn't huge and I'm not doing anything super complex -- just trying to speed up things that are too slow in Python, be it parsing or data structures. I think in practice it is best to use multiple technologies if possible in a project, each one suited to the task.

And WRT the coroutines in OP, I don't know the details, but the thing I miss most from Python in C++ is yield and generators, which hopefully coroutines will eventually help with. Defining your own iterators is very annoying.
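
For what it's worth, a yield-style generator with coroutines is expected to look roughly like this -- a sketch assuming a library generator type such as the std::generator later standardized in C++23; with the experimental support discussed here you'd need a hand-rolled or experimental equivalent:

    #include <generator>   // C++23; not available in the experimental toolchains above
    #include <iostream>

    std::generator<int> count_to(int limit) {
        for (int i = 0; i < limit; ++i)
            co_yield i;    // suspend and hand a value back, like Python's yield
    }

    int main() {
        for (int x : count_to(3))
            std::cout << x << '\n';   // prints 0, 1, 2
    }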


> you get direct access to all these rock-solid libraries in C that everything is built on -- zlib, libcurl, etc. So instead of dealing with a tower of wrappers in a high-level language

That's what I love about Objective-C: you get that, the direct access, and at the same time transparent, non-wrappered access to a high-level, dynamic object-oriented language.


Don't forget about Objective-C++! That's where the true powers of the dark side are.

Side Note: Objective-C is such a criminally underrated language. I often see people complaining about the syntax, but it's just syntax. Once you get over it, I find that the language makes the OOP paradigm a joy to work with.


Honestly, I never seriously considered it. I thought it was more-or-less an "Apple thing". OFC it is supported on other platforms, but seemingly not widely used (I write scientific code that really only has to work on Linux).

But after looking at its feature set, I wonder how easy it is to get by without templates? It seemingly uses weak typing instead, like Go, for scenarios that would normally require generics/templates. My attitude towards templates/generics is that they are absolutely necessary for a good standard library and core containers, but implementation of new templates in your own code should be rare.

Also, I do like namespaces in general, although not necessarily C++'s implementation of them.


I'll be a contrarian here and say that Objective-C is an ugly mess, and not even "because brackets". It's the language full of terrible hacks, historic baggage, and bolted-on features. Objective-C++ is basically the worst of both worlds. :)

Programming languages have advanced a fair bit since the 80s, it's time for Objective-C to die peacefully.


Have you ever tried Cython? It's always seemed cool to me, so I've always planned on using that if I ran into a case where I needed (relatively) direct C library access or wanted to speed things up beyond what straight Python is capable of.


Sure. I use it for some things and straight C++ for others. Cython is good for simple things, like small tight loops.

One thing that is much more reliable in pure C++ is any kind of threading or OpenMP etc. Theoretically Cython has it, in practice it can cause very weird problems. Also if you want to use C (or C++) libs in Cython, you have to manually declare all the function prototypes before using them.

Also Cython has a tendency to make minor breaking changes to syntax every version, breaking my code and/or making it version-dependent. Since I distribute to others, the best method for linking performance intensive-code to Python I have found is:

1. Write the core in C++

2. extern "C" a simple C API for it

3. Compile both to a SO

4. Access the SO from ctypes

It is more robust than Cython IME. Another advantage of this approach is that you have now have a generic C library you can use anywhere else if you want, not just Python. And you don't have to link -lpython so the C lib can be used in places where Python isn't installed. Finally, it can be nice to have some C++ binaries for startup time, and AFAIK you can't compile binaries with setuptools.
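
For illustration, a minimal sketch of steps 1-3 (the file, function, and library names are made up):

    // fastlib.cpp -- hypothetical C++ core exposed through a tiny C API
    #include <cstddef>
    #include <numeric>

    static double sum_impl(const double* data, std::size_t n) {
        return std::accumulate(data, data + n, 0.0);   // the "real" C++ code
    }

    extern "C" double fastlib_sum(const double* data, std::size_t n) {
        return sum_impl(data, n);   // plain C signature, loadable from ctypes
    }

    // Step 3: g++ -std=c++14 -shared -fPIC -o libfastlib.so fastlib.cpp
    // Step 4: in Python, ctypes.CDLL("./libfastlib.so").fastlib_sum(...)
    //         (after setting argtypes/restype on the function)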


Thank you for your very informative answer!


If you're into both C++ and Python, I recommend pybind11 [1] - Cython is a crutch and moreover C++ is not a first class citizen in it. For reasonably large codebases Cython just doesn't scale and debugging is a total nightmare.

Disclaimer: I'm a contributor to pybind11.

[1] https://github.com/pybind/pybind11
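
For anyone curious, a minimal sketch of what a binding looks like (module and function names are made up):

    #include <pybind11/pybind11.h>

    int add(int a, int b) { return a + b; }

    PYBIND11_MODULE(example, m) {
        m.doc() = "toy module";                  // optional module docstring
        m.def("add", &add, "Add two integers");  // callable as example.add(1, 2)
    }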


Please note that I specifically singled out C++ here, not C. C is not C++ (not even technically a subset of C++). It is a very different and a much smaller language.


Yes, I know they are different, but I was talking about C++. I heavily use the STL and greatly prefer it to C idioms. I also like namespaces. Templates can be very hairy but void pointers are not great either.

In short, I simply think C++ can be used responsibly if you limit yourself to the subset of features that you (or your team) can understand.

For example, there are people who go crazy with Python metaclasses and such. A statement about interacting language features being dangerous could apply to anyone in any language who is trying to be unnecessarily clever.


C'mon over to the Go side instead ;-)


Template meta programming is unnecessary to use C++, even in its most complex of applications. It's nice if you can do it, but a lot of companies don't like developers to use TMP too much because it's not expected everyone will know or understand it. I consider it an extension to the language that is cool but not necessary for most of the work you do. It's great that it is there, and great that your compiler and library authors use it to make the tools you use very fast, but for your own code, it's just not a gripe you need to worry about.


I see this sentiment all the time, in every programming language - x feature is for library authors, but it is bad for application devs because we cannot understand it. It's a terrible notion.

Royal "you" and "we" below, not specifically hellofunk or the OP.

You use the libraries, so your application uses the feature already. You should understand one layer of code beneath yours. You should have an interface layer between your code and the library. By not doing either (because libraries use magical features) you are coding yourself into a corner with a feature you don't understand.

The libraries use it so that their code can be more general and well-specified. The libraries are used by thousands more developers than your code. Your code probably could also be easier to maintain if it were more general and well-specified. Imagine if thousands of people could reuse your authentication interface at your company! Think of the money you will save.

All it takes is time to learn a feature you must partially understand because you are using the library built ON that feature.

EITHER learn the feature, or use libraries that only use features you understand. If you can't use the language without the feature, you can't use the language without understanding the feature. If you allow libraries that use the feature in your app code, you should allow app code that uses the same feature.


I don't quite understand this, sorry. Say I'm using a very high-level concurrency library, for lack of a better example off the top of my head. Now this library uses lots of kung-fu to spread tasks around to threads without me having to properly understand the complex techniques of thread pools and a myriad of other things. After all, the library exists to make my life easier.

Are you saying I should not use such a library unless I understand all the techniques it uses? I doubt a significant portion of developers fall into that category of library users.


Yes. If you don't understand one layer below the code you wrote (a rung below on the ladder of abstraction) then you cannot understand if you are even using the correct abstraction (if the library is doing the thing it says it does when you call x). You literally have no idea what your code does, or even should do.

A linked list is a perfect example. If you use a linked list, you should know the basic properties of a linked list. When given one from an unfamiliar library, you should go read its code. You should run some basic performance tests on it. You should separate it from your code via some basic delegating api, so that you can replace it with another implementation in the future.
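
A minimal sketch of that kind of delegating layer (the wrapper name is made up, and std::list stands in for whatever the library provides):

    #include <list>

    // Thin wrapper: callers see only this interface, so the underlying
    // container can be swapped out or benchmarked independently later.
    template <typename T>
    class TaskQueue {
    public:
        void push(const T& value) { items_.push_back(value); }
        T pop() { T v = items_.front(); items_.pop_front(); return v; }
        bool empty() const { return items_.empty(); }
    private:
        std::list<T> items_;   // the library's linked list would go here
    };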

And concurrency is another example - do you need, and does your chosen library give you, parallel execution or merely asynchronous execution?

This isn't as onerous as it sounds -- you do it all the time. But blindly trusting any software without a proof of its correctness (encryption is an example where you might not understand even the interface layer of the software, but use it directly anyway) is a dangerous habit.

If there's magic there that you don't understand and you choose the library, when that magic fails to satisfy some goal you need to understand why so that you can fix it, implement your own workaround, or swap it out with a library that does what you want.

If you use a concurrency library, and don't understand threadpools, block in a thread and threadlock your application, you are in trouble.

If you use IEnumerable, and don't filter out the results before you toList, you are in trouble.

If you use monads and think flatmap is stack safe, always, you are in trouble.

Note you don't need to know ALL the tricks. Just the ones employed by the functions you call.


I don't think you are making the point you think you are making, or you're being unclear. Take the concurrency example. There is a big difference between understanding the high-level concept of parallelism vs. serial execution, which you suggest any library user understand, and understanding the C++ techniques and raw code that interacts with low-level threads to provide the library functionality. A good library does not require its users to understand its implementation in order to use its API -- that would be ridiculous most of the time. The burden falls on library authors to provide a good API abstraction that does not require users to understand the implementation; though some bad APIs may in fact require this of the users due to poor design.


Yes, I think that I'm being unclear.

As a user, I need to understand the top level abstraction of the library I am using - the api. I need to read that api's code and understand what it does (not any deeper) in order to use it.

If it is a low-level api that directly implements a thread-pool, I need to understand that. If it is a high-level template that implements the async continuation monad[1], I need to understand the api surface that I call, and just the body of that surface and type signature.

1: https://www.fpcomplete.com/blog/2012/06/asynchronous-api-in-...


> lack of a proper, cross platform, package manager: I know the history, I know that C/C++ are basically embedded in the system and they use the system package manager,

I think the problem is deeper than that and that the problem could be that C++ does not have a standard ABI. It is compiler-specific. Therefore, having a system package manager does not work (as it does for C).

I compile my personal C++ codebase with four different compilers and different debug settings. The only way to manage that is to have a monorepo with all my dependencies checked in .


I've been wondering if a package manager is something that should be considered part of a language or just a distribution of that language - take npm for instance. It's not a "package manager for JavaScript." It's a package manager for node.js, which is a JavaScript runtime environment. Python's pip was added long after the inception of the language, there are plenty of package managers for PHP, etc.


There is the https://buckaroo.pm package manager.


If you understand the internals they're anything but pythonic. Coroutines can be a zero-overhead abstraction in some cases.

For example here is a test that verifies the optimized output for a simple (and misnamed) integer generator compiled at -O2. Notice the arguments to `print` are integer literals. (https://github.com/GorNishanov/clang/blob/Experiments/test/C...)

I'm no python expert, but I suspect C++ optimizes this code better.


I think the point of being pythonic is that it's more friendly to the programmers. Bonus point if it's also optimized :)


I love seeing the difference between Python and C/C++ coder cultures: To a Python'er, bonus points are given for performance and friendly coding is nearly assumed. To a C++'er, bonus points are awarded for friendly coding, and performance is nearly assumed.

(Even more, I love how we're seeing some convergence here: no-compromise improvements to weak spots on each "side" of the programming language spectrum.)


C++ is one of the least Pythonic languages I know. It clashes with nearly every point on the Zen of Python[0], and C++'s "modern" libraries do nearly nothing to change that.

[0]: https://www.python.org/dev/peps/pep-0020


> It clashes with nearly every point on the Zen of Python

I'd argue that C++ is more pythonic than Python. For instance:

> Explicit is better than implicit.

Since when is dynamic typing more explicit than static typing ?

> Errors should never pass silently.

Well guess what, with C++ they don't even pass compilation.


Since dynamic typing and static typing are not the same thing.

"Dynamic" doesnt mean hidden. It means types are features of objects at run time.

"Static" doest mean explicit. It means types are rules the compiler applies to the program text.


>> Errors should never pass silently.

> Well guess what, with C++ they don't even pass compilation.

    int *a;
    *a = 5;

Segmentation fault. Oopsy.

To be fair, "g++ -Wall" does complain that "a" is not initialized. But it is a warning, not an error.

(And don't tell me that "you are not supposed to use pointers". They exist, someone is going to use them and proceed to shoot themselves in the foot.)


I've just been debugging/fixing a bunch of C++ code that was incorrect due to a bunch of implicit casts and normally-invisible undefined behaviour, neither of which would have been possible in Python. I am not convinced.


> Beautiful is better than ugly.

This one is the most apparent. In almost every C++ vs. Python example in the article above, the C++ version is notably uglier. Just look at lambda expressions:

Python

    def adder(amount):
        return lambda x: x + amount
C++

    auto adder(int amount) {
        return [=](int x){ return x + amount; };
    }
I'm assuming [=] is used to represent Lambda here? That seems like a strange syntactic choice to me.

Edit: turns out there are various forms of Lambda (of course there wouldn't just be one in C++), for ex:

    [a, &b], [this], [=], [&], []
Depending on how the variables are captured. Capturing that additional complexity in a tight space is the cause of the ugliness here.

http://en.cppreference.com/w/cpp/language/lambda


Literally everything you're complaining about is down to differences in abstraction and detail, not aesthetics.

The stuff in square brackets is the capture list. Unlike python, which implicitly captures the full outer scope by reference, C++ allows/requires you to specify which outer variables you want to reference in the lambda, and whether they are captured by value ("=") or reference ("&"). This allows C++ to implement the lambda with a simple static function in most cases, where python needs a bunch of tracking data to preserve the outer stack from collection.

The rest of it is all just following the existing syntax of the language: C++ uses curly braces where python uses bare expression. C++ is typed and requires an "int" declaration on the argument. C++ puts a semicolon at the end of its expressions. You don't have to like this per se, and you're welcome to call it ugly if you want. You're just 30-ish years late to the argument, though.
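
For concreteness, a small sketch of what the two capture modes mean in practice:

    #include <iostream>

    int main() {
        int amount = 1;
        auto by_value = [=](int x) { return x + amount; };  // copies amount now
        auto by_ref   = [&](int x) { return x + amount; };  // refers to amount
        amount = 100;
        std::cout << by_value(1) << '\n';  // 2   (saw the old copy)
        std::cout << by_ref(1) << '\n';    // 101 (sees the current value)
    }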


Rust supports the same level of power and abstraction, yet it makes various aesthetic (ish) decisions that bring it closer to Python than C++ in this example.

Rust implements lambdas the same way as C++, yet it doesn't need capture lists. It has two modes: by default it tries to guess whether to take each variable by move/copy or by reference, but you can specify 'move' on a lambda to have it move/copy all mentioned variables. Not as flexible, right? Actually, it's equivalent in power, because if you want to "capture a variable by reference" in a 'move' lambda, you can manually assign a reference (pointer) to it to a new variable, and move that. With Rust's syntax, the new variable can even have the same name as the original, so it looks very natural:

    {
         let x = &x;
         foo(move || bar(x));
    }
This is a bit more verbose, but most of the time you don't need it.

Like C++, Rust uses semicolons, but it largely unifies expressions with statements. For example, the following are equivalent:

    foo(bar(42));
    foo({ let x = 42; bar(x) })
The syntax for lambdas is "|args| return_expression", so a simple lambda can be very succinct: "|x| x + 1". But just like above, return_expression can also be a braced block, allowing you to have large bodies with multiple statements. In most languages, supporting both blocks and expressions as lambda bodies would require two forms of lambda in the syntax, an added complexity. C++ conservatively chose to support only blocks, while JavaScript and Swift, among others, chose to have two forms. But in Rust, that's just a special case of a more general syntax rule.

Rust is statically typed, but it has type inference, so - among other things - you can usually omit the types of lambda arguments.

So what does the adder example actually look like in Rust? With soon-to-be-stable 'impl Trait' syntax, like this:

    fn adder(amount: u32) -> impl Fn(u32) -> u32 {
        move |x| x + amount
    }
The type declaration is somewhat verbose, but the implementation is quite succinct. The only ugly part IMO is 'move', which would be better if it weren't required here. (Without 'move' it tries to capture 'amount' by reference and complains because that wouldn't borrow check [because it would be a dangling pointer]. But it would be nice if the default lambda mode could decide to move in this case, either because of the lifetime issue or just because 'u32' is a primitive type that's always faster to copy than take an immutable reference to.)


FWIW: this exact scenario (capture a local and add it to the lambda's argument) is treated in the Rust book on closures, and the solution picked isn't yours. They want to use Box::new() to explicitly allocate a heap block to track the closure (in C++ the compiler does this for you and puts it into the function type):

https://doc.rust-lang.org/book/closures.html

I'm not expert enough to decide which is "official". But honestly... this is really not a situation where Rust shines aesthetically.


This is because "impl Trait" was accepted for stabilization about a week ago; the book only covers stable things.


C++ actually has the same problem; you can't specify the return value of an unboxed lambda. The differences there are that C++ lets you deduce function return types since C++14, something Rust doesn't want to support, and that boxing is done through a special std::function type, rather than a normal heap pointer.


> something Rust doesn't want to support

Well, in this way. "impl Trait", which was just accepted for stabilization, will let you return unboxed closures.


The reason Box is used there is that the type of a closure in Rust is anonymous, so can't be named as a return type. Once the `impl Trait` syntax for returns is available, the heap allocation won't be necessary any more, because you'll be able to write `fn factory() -> impl Fn(i32) -> i32 {`


So... a three-character capture list is ugly, but that extra impl clause is... fine? Obviously we shouldn't be getting into a Rust vs. C++ flame war in a clang thread, but would you at least agree that (relative to python) both languages have significantly more verbose syntax owing to the interaction between closures and the languages' data models?

Edit to clarify (to the extent of my rust understanding anyway): C++ didn't want to implicitly suck in external variables by reference and makes the programmer explicitly flag them as references. Rust has a borrow checker, so it can skip this part and simplify the syntax. But Rust's type system has no good way to implicitly figure out the full type of a lambda by its signature, so if you want to pass one around you need to specify its whole type signature twice, once in the expression itself and then again in the signature of the function that will receive or return it.

You pays your money and you makes your choice. Neither language is really a good fit for anonymous functions in the sense that Lisp was.


The extra syntax in this case is all about static typing (and in the case of Rust, exacerbated by having lifetimes as part of the type).

> But Rust's type system has no good way to implicitly figure out the full type of a lambda by its signature, so if you want to pass one around you need to specify its whole type signature twice, once in the expression itself and then again in the signature of the function that will receive or return it.

Rust has no problem figuring out the type of the lambda. But the type of the lambda is anonymous, and unique to that particular lambda. That type is guaranteed to implement the Fn trait, so that's what you specify when you need to return it. You don't need to specify it twice, however - I'm not sure how that follows from the snippet above?

And C++ has the same exact problem: C++ lambdas also each have their own unique type. However, C++ doesn't have traits at all. In C++, if you want to return a lambda, you'd use std::function, which is roughly analogous to Rust's Box<> for functions in this case - you can't just return the lambda directly (well, you can, but only in a context where the return type is implicit and can be inferred from usage - e.g. when returning from another lambda).
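
I.e., a minimal sketch of the std::function route:

    #include <functional>

    // The lambda's unique type is erased into std::function, which can then be
    // named as an ordinary return type (at the cost of possible heap allocation
    // and indirect calls).
    std::function<int(int)> adder(int amount) {
        return [=](int x) { return x + amount; };
    }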


In C++14 and later you can return a lambda by using auto as a return type, which compared to impl Trait is shorter but less strongly typed (so more prone to confusing errors).


> So... a three-character capture list is ugly

IMO, it's 'ugly' not because "[=]" is inherently syntactically ugly, but because of all the variations of capture lists and the subtlety of the differences.

> but that extra impl clause is... fine?

It's not great - in fact, 'impl Trait' is a pretty subtle feature as well. But I think it's more principled.

> would you at least agree that (relative to python) both languages have significantly more verbose syntax

Yes…

> owing to the interaction between closures and the languages data models?

…Not really. I'll elaborate later in the post.

> so if you want to pass one around you need to specify its whole type signature twice, once in the expression itself and then again in the signature of the function that will receive or return it.

No, you don't. Generally speaking, it needs to be specified at most once, sometimes zero times. In my example:

    fn adder(amount: u32) -> impl Fn(u32) -> u32 {
        move |x| x + amount
    }
the type is only written once. Admittedly, the expression has its own list of arguments ("|x|" versus "(u32)" in the type), but in the expression, only the names are specified, not types.

The caller of `adder` generally wouldn't need to specify the type:

    let a = adder(1);
    println!("{}", a(2)); // 3
…but if it's stored in a struct or passed across additional function boundaries, it may have to be repeated.

When would it have to be specified zero times? Well, for one case, if there aren't any function boundaries involved:

    let amount = 1;
    let adder = |x| x + amount;
    println!("{}", adder(amount));
But also, higher-order functions are often defined once and used many times. For example, the `map` method on Iterator is defined in the standard library with a full (generic) type signature, but I don't need to declare any types to use it:

    let v = Vec::from_iter(vec![2, 3, 4].iter().map(|x| x + 2));
    println!("{:?}", v);
Admittedly there's a lot more noise there in general than Lisp or Python, but a lot of that is a desire to be explicit about allocations, not directly related to anonymous functions.

FWIW, not doing type inference across function boundaries is a semi-artificial limitation. Haskell can do it just fine despite being statically typed, and although Haskell has very different implementation strategies, that's not related to type inference. C++ now sort of does it with `auto`, but only does 'forward reasoning' - i.e. it can deduce the type of an operation's result from the types of its operands, but not vice versa. Rust has true type inference, but it eschews global type inference (across functions) mainly because of a tendency to cause hairy/confusing errors in larger programs. (There are also compilation time concerns, but they're not the primary reason AFAIK.) I suppose you can say that dynamically typed languages are easier to understand when they go wrong, compared to hairy Haskell errors - but I'm not sure to what extent that's actually true, as opposed to functional programmers just tending to use more complex types.

By the way, C++(14) actually doesn't require the lambda argument type to be specified in this case. This works fine:

    auto adder(int amount) {
        return [=](auto x){ return x + amount; };
    }
(but not because of type inference; rather, the returned closure implements operator() for any argument. Also, 'amount' can't be 'auto', which is also an arbitrary limitation, but the rationale is pretty confusing to me considering that the return type can be 'auto'.)


> Literally everything you're complaining about is down to differences in abstraction and detail, not aesthetics.

I'm not complaining. Just pointing out how it's notably uglier, a critical part of being 'Pythonic', which affects both aesthetics (emotional experience) and functionality. That's not a value judgement but an observation.

Python is obviously more limited in power (functionality), which is a product of decisions made about base abstractions. That filters upwards and becomes apparent in both visuals and utility (and as a result learning curves, developer happiness, applicability to problem sets, etc).


"Ugly" is a subjective aesthetic judgment, not an observation.


If it's merely a consequence of choices made to support complexity then I disagree. The aesthetic and utility of the design is a product of many limitations and hard choices made early on. C++ will always be an ugly language because those trade offs were made from the beginning to support a breadth of functionality and configuration.

Simplicity and accessibility are extremely challenging things to support when you have a product that does everything for everyone.

The need for 5 different lambda expressions is a perfect example of that.

I personally think C++ still has a role in the world, as it clearly has lots of adoption. But if you want non-Ugly then those various use-cases have to be broken up into single-purpose (or niche) languages such as Rust for systems programming and Python for 'scripting' and Lua for glue/config code. All of those languages listed have made hard tradeoffs to be good at certain things. C++ and Java did not - largely because they are "yes" languages, as Steve Yegge calls it (the language designers said yes to everyone early on).

If ugly languages were merely 'bad' or 'wrong' for being ugly, then they wouldn't both be among the most used languages. They clearly have long-lasting value in the industry - which people like Yegge believe is because they are okay with being ugly.


> This allows C++ to implement the lambda with a simple static function in most cases, where python needs a bunch of tracking data to preserve the outer stack from collection.

This isn't inherently about capture lists; Rust doesn't use capture lists, yet ends up doing the same thing as C++ here.


Also note that C++ needs the curly brackets because, differently from python, lambdas can contain arbitrary statements, not just one expression (one would use a local function in python).

There have been discussions about adding a lighter syntax for single expression lambdas that doesn't require the curly brackets and the 'return' keyword, but so far no actual progress.


That C++ line is very C++nic...

It's much better specified, gives you much more power than the Python one, tells you exactly what is happening, is so short it's cryptic, and is ugly as hell.


That's excessively tame for C++. If you want to really see the difference, look at actual libraries. Tracing through libcxx is a lot of fun. Here's <optional>, for example.

https://github.com/llvm-mirror/libcxx/blob/master/include/op...


I'm sure there's plenty of uglier stuff on far bigger scales. But good design is all in the small details.

Looking at this one small syntactic difference explains a lot about why C++ is as ugly as it is. It has to pack in a lot of complexity everywhere, even at the most basic level, such as how to label a lambda expression. The ugliness may simply be a cost of offering much more power (simplicity is difficult/impossible when you need to support complex base abstractions). But it's still ugly. And therefore a big reason why it's not very Pythonic as the OP said.


Most of the ugliness there is coming from using "__"-prefixed identifiers (reserved for implementations) and macros hiding code specific to certain platforms or versions of the standard.


Even if those weren't directly caused by flaws in C++, a find/replace to remove "__"s and the five conditional macros does little to make this code less complex.


But Python 2.7 has this problem:

    def f():
        x = 0

        def add(amount):
            x += amount # UnboundLocalError: local variable 'x' referenced before assignment

        add(10)
        add(20)
        add(30)

        return x


    print f()
In C++, this problem doesn't exist.
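
For comparison, a minimal sketch of the same thing in C++, where capturing by reference makes the mutation explicit:

    #include <iostream>

    int f() {
        int x = 0;
        auto add = [&x](int amount) { x += amount; };  // capture x by reference
        add(10);
        add(20);
        add(30);
        return x;   // 60
    }

    int main() { std::cout << f() << '\n'; }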


The question is the cost of supporting it (assuming this is a problem that needs fixing), as it has far-reaching implications in all of the complexity added to the abstractions layered on top of it.

I'm not making a good/bad argument that one is better than the other. The two languages have widely different usecases and rationale in the language design. But I'm not convinced C++ is (or can become) very Pythonic given the complexity it must support at all the basic levels.


Python 3 release date: 3 December 2008

C++11 release date: 12 August 2011


True, but in C++ the problem never existed.


This is fixed in Python 3 with the nonlocal keyword. In Python 2, the idiomatic work-around is to have x be a list of one item:

    def f():
        x = [0]
        def add(amt):
            x[0] += amt
        add(10)
        return x[0]


[] marks the beginning of a lambda, and within it you declare whether variables are captured by value (=) or by reference (&).


It became my next favourite language in 1993, after I started looking for a Turbo Pascal successor outside MS-DOS/Windows and C failed to impress me.

I don't use it as much as I used to, but the improvements of the last few years, especially after the introduction of AST libraries via clang, have done wonders for the eco-system.

Now we just need C++ IDEs to catch up with Energize C++ and VA C++ v4, on their Lisp/Smalltalk like experience.


IPython has always been my favorite interactive UI, so if I had a ton of money to blow on tools, I'd love a more polished Cling (syntax highlighting, intelligent auto-completion, inline docs, emacs/vi keybindings)

https://github.com/vgvassilev/cling


Thinking about it now, I think we could get 90% of the way there just using python-prompt-toolkit.

https://github.com/jonathanslenders/python-prompt-toolkit


The language has definitely come a long way. Coroutines are definitely a useful addition to clang. It was my #2 choice for writing a middleware for Elasticsearch, however I ultimately chose Golang because I didn't want to deal with CMake.


CMake isn't the only build system in town, not being able to use an expressive language due to a specific build system seems a bit too much.


The struggle with bad tooling is real.


I would understand dropping C++ for Swift, Rust, D ..., not Go.

Borland C++ for MS-DOS had pre-processor macros for generic data structures, back when ANSI was still trying to figure out what templates should look like.

In those days, Java or Oberon derivatives as alternatives made sense, because templates weren't even a workable feature in 99% of C++ compilers, so most of us didn't have real exposure to them.


> CLion is becoming quite good (if you succumb to CMake)

As someone who rarely dabbles in C/C++, but still has some CMake under his belt... What's wrong with Cmake?

I must admit, for the use-cases I had, I found it quite an improvement over regular make.


Nitpick: despite its name CMake doesn't really replace make (actually you can use it to generate Makefiles instead), it's more of a build configuration system like autotools.

As for criticisms of CMake, when I used it I thought it worked pretty well but I think its CMakeLists syntax is atrocious. Not that it's not an improvement over the autotools, but at least these ones have the excuse of being built on top of a mountain of legacy subsystems with various quirks.


I would love it if one day you could put a CMakeLists.xyz in the project instead, where xyz was just a slightly nicer syntax, but with the same semantics, and compatible with regular cmake. Tbh I'm not sure how I would want xyz to look though.


> I'm not sure how I would want xyz to look though.

I'd want it to look like a standard scripting language of some sort.


Really, premake syntax looks nice (https://github.com/premake/premake-core/wiki/Your-First-Scri...) and it actually uses a real programming language :-)


Cmake is acceptable for distribution, but not for ease of development, in my experience. There should only be a single step needed to generate a correct build, no matter what has changed. Instead, some changes require running make (e.g. modifying existing files), while others require running cmake then make (e.g. adding new files, or changing header file includes). Why would I use cmake when I could just write a makefile to handle everything in a single step?

Granted, I've been moving more towards scons lately, as python is a much easier language to work with. My usual SConstruct will check whether I am making a python extension module, and will download pybind11 if I am. That sort of thing is still possible with make, but starts getting more painful.


Having to run cmake before make is usually avoidable.

If you aren't using globbing but are specifying source (and header files) explicitly, just running make will do the right thing. Just running make also checks if you have modified CMakeLists.txt and re-runs config generation.


Yes, don't use globbing to find source files! CMake doesn't record that the file list came from a glob operation, so it doesn't set up the dependency checking to handle it. (Not all targets support this sort of thing anyway. I bet GNU make does a good job of it, especially if you use gcc -MMD and pattern rules; I rather doubt Ninja supports it; Visual Studio definitely doesn't.)

With manually-specified source file lists, using the Ninja, GNU make, NMake and Visual Studio generators, I have had to re-run cmake by hand no more than a few times, each time seemingly due to a bug somewhere. (Possibly my fault... I can't remember having had to do this with my current project.) Aside from that it's good at detecting when relevant files have changed, CMakeLists.txt included, and reinvoking itself as part of the build.

CMake has plenty of horrid parts, and there's probably plenty of reasonable workflows that it doesn't support, but I have to say that I've found the day-to-day workflow perfectly satisfactory...


That certainly sounds like an issue with cmake, rather than with globbing in general. With a properly written makefile, or with scons, there is no issue whatsoever in globbing to find source files. This is one of the main reasons that I have stayed away from cmake, because needing to specify an explicit list of all source files is setting myself up for errors in the future.


What's probably happening is that most people don't mind, and the current behaviour maps neatly to the way most IDEs behave. And so there probably isn't the will to change it. CMake does have an issue tracker: https://gitlab.kitware.com/cmake/cmake/issues


Autotools also requires you to explicitly enumerate all of the source files that end up in your build, although it also properly sets up Makefile dependencies so you don't have to re-run autotools when you add a new source file (like you do with CMake)


You typically don't have to do this with cmake either. The build depends on the CMakeLists.txt file (and the files it depends on itself), which is where the list of source files is stored.


That's a pretty big if. If I have a standard layout with src and include directories, then I expect everything within the src directory to be included, which requires globbing. If I add a new file, then there is no reason to need to edit the CMakeLists.txt as well. Similarly, header files are explicitly listed in the source files. Having them duplicated in the CMakeLists.txt only introduces potential for error.

In my mind, anything that needs to be specified explicitly is an opportunity for error.


I'm sorry; I didn't mean to imply there was anything wrong with CMake. It was meant more as a joke, and to suggest that CMake is a new tool that requires a learning curve, like anything new, hence people resist until it becomes so prevalent that one gives up resisting and invests the time to learn it. In my case, it was to get CLion's external library interface working so I could jump to definition in headers in /usr/local/include.


> What's wrong with Cmake?

One line summary:

    set(MYVAR 1234 CACHE FORCE PARENT_SCOPE)


Well you're trying to mix up two different function signatures...

    # Sets the MYVAR variable in the cache
    set(MYVAR 1234 CACHE STRING "" FORCE)
    # Sets MYVAR in the scope above the current level
    set(MYVAR 1234 PARENT_SCOPE)
Though cmake is definitely not a "pretty" language... Don't get me started about lists.


Sure... I should have just quoted the documentation, because the point is that any of these concepts exist:

    set(<variable> <value> [[CACHE <type> <docstring> [FORCE]] | PARENT_SCOPE])
> Don't get me started about lists.

Yes, by no means exhaustive!


Cmake is more of a replacement for autotools than for make.

Advantages: Good Windows support.

Disadvantages: Dictates the directory structure, much less flexible than autotools.

If you know shell, an existing autotools project can be modified easily.

If you want to do something special in cmake, first you need to go to Stackoverflow. If your are lucky, the thing you want to do is supported (often it is not).

All in all, I feel locked in by cmake - to the point that once the build works I'm less inclined to refactor because the directory structure cannot be changed easily.


Would you recommend a book/resource to learn modern C++ from?


A Tour of C++, by the creator of C++, has been kept up to date: it's short and quite accessible even for a C++ beginner, yet current.


C++ Primer, 5th Edition is good for beginners.


Effective Modern C++ by Scott Meyers is good for beginners


Please don't. It's not for beginners, and not for veteran C++ programmers who want a refresher. I've been polyglot for the last 20 years, and as I've expanded my skillset through the years, I let C++ go 15 years ago. With C++14 popping up, I gave it a try and liked the feeling - so I decided to first go through the agonizing process of refreshing my memory (agonizing because it hurts to see how much you forget, having once been considered "an expert"), and then figure out the delta towards C++14 and learn it. Obviously I went straight to the "Effective X" series, thinking it should be a no-bullshit intro for an experienced developer.

Acknowledging that Scott Meyers is a great author, I'd say something morbid happened with this book. It feels like Scott bashes the language more than he introduces it - and while he's obviously very knowledgeable and spots glitches, pitfalls, death traps, and WTFs in the details, he keeps throwing these waves of "here be dragons" at you, so that halfway through I just quit reading it and quit attempting to refresh my C++ knowledge after my 15-year break from it. You should get Stroustrup's "A Tour of C++" - it is a softer, more cheerful and proud book that'll motivate you (and god knows you need motivation with C++).

Everything he says is true, and I don't want to say "it's better not to know these things". But half a year later I gave modern C++ another try _without_ this morbidness following my attempt, and I've been happy following "soft" intros into modern C++, without the bummer parts. Truth be told, I haven't bumped into what Scott kept highlighting - yet.

So I would say Effective Modern C++ is for very experienced C++ developers with fresh memory of how the internals work.


As someone who tried to bone up on my rusty C++ from this book, I agree with every point jondot made above.

It's a disaster to try and understand unless you are already starting from a very strong point in your C++ knowledge.


Very well put.

This matches my experience too. I regret buying them.

The series is not for those who were used to, but subsequently lost touch with, C++03 and earlier.

I found it better to learn the modern version by 'porting' earlier C++ code to use modern idioms.



