"To round up, C++ does not dictate its tooling, which gives a lot of choice and flexibility. But at the same time it makes it complex for beginners to join projects and to start projects with it."
This was a big problem for me when I first started using C++. I don't remember it being too bad when I was just working on small projects and things I wanted to do at school. The problems started when moving past the "1-person" projects. Want to contribute to other projects? Chances are they all have very different setups and configurations for building, testing, and dependency management. That adds friction even before you start looking at the actual codebase.
I understand that a lot of these problems can occur for languages as old as C++, but I wish the tooling was a little more opinionated and worked a little better at guiding newcomers into doing things in a nicer manner.
C++ is commonly used for large, complex projects. The problem with opinions is that eventually you come across something the opinion will not allow. For a small project an opinionated build system makes things easier, and you essentially never run across something that can't work. For very large projects that is not true, and so you end up fighting opinions in some places.
Examples? As a maven advocate I've commonly seen this claimed ("we have to have an ant build because we need to do x/y/z custom thing") but I've never seen an example that held up under scrutiny. E.g. there's no reason any project would ever need a custom source directory layout. No project needs to run tests before compile (and if you really need build step x to happen before build step y, you can always separate them into separate modules and use the normal "module z depends on module w" functionality that any decent build system will have). I've never seen a case for a custom version-numbering scheme that holds up. I've never seen a case for custom VCS release tags where the benefits actually outweighed the costs. All the custom hacks that make for incomprehensible builds turn out to be replaceable with at most a few lines of code or, in very rare cases, a plugin that follows the normal rules of plugins in that build system.
Maybe one can't reorganise the source tree layout because to do so would break our custom tooling / integration systems / interface with proprietary vendor tools. We can't change version numbering schemes because our packages already exist 'in the wild' and can't be changed mid-sequence without causing both technical and user pain.
Fundamentally: the codebase has worked perfectly well this way for the last 20 years; what's the business case for changing it (at potentially significant cost) just to be compatible with XYZ new tool that everyone is rallying behind?
> Maybe one can't reorganise the source tree layout because to do so would break our custom tooling / integration systems / interface with proprietary vendor tools.
That hardcodes the source layout? I don't buy that a tool that's being sold for money would do that (precisely because there's no standardization in the C++ world; any tool that wants to have more than 1 customer will have to support more than 1 directory layout), and if the tools are internal then by definition you have the ability to make changes to them.
> Fundamentally: the codebase has worked perfectly well this way for the last 20 years; what's the business case for changing it (at potentially significant cost) just to be compatible with XYZ new tool that everyone is rallying behind?
Well, if you make adopting good tooling in C++ harder than switching to a language that already has good tooling, don't be surprised when developers do that.
> and if the tools are internal then by definition you have the ability to make changes to them.
I happen to work in a shop with problems that look somewhat like this. We have a non-trivial amount of legacy code that we just don't have the resources or motivation to fix. In those cases, we've found that the cohesion of doing it the same (wrong) way everywhere is easier to work with than doing it the new way in some places.
> I don't buy that a tool that's being sold for money would do that
If you work with proprietary hardware (think FPGAs), if you want to use the hardware, you have to use proprietary vendor tools, whether you want to or not.
> if the tools are internal then by definition you have the ability to make changes to them.
This misses the point. Having the technical ability to modify the tool does not equal having the practical, business-case-defensible ability to change it.
> Well, if you make adopting good tooling in C++ harder than switching to a language that already has good tooling
The point is that in a 20 year, n-million line codebase, neither is "easy", and the pragmatic solution is to do neither.
Usually this kind of refactor is something done as the very first task following the last release in a minor series and as the first step in a major new release. Going from version 7.4.32 to 8.0.0? Refactor that source tree, reformat all the code, and start over on the build toolchain! Best of all, make sure some junior dev does the actual checkin for the reformat.
In all seriousness, I think this is the only way. Yeah, you may be dealing with cross-patching forever, but let's face it: for big lumbering codebases, perfect backwards compatibility in the source tree is usually the smallest of issues, and the benefit of using standard tools far outweighs the cost of occasionally reinterpreting back-patches (if they are even possible).
The benefit of using widely supported tooling is even greater for large codebases. Automated reformatting is easy, and moving code around isn't that hard if it's going hand in hand with a build system that supports it.
Staying in the cooking pot of ever larger proprietary and exotic hacks is how organizations grind to a halt.
> The benefit of using widely supported tooling is even greater for large codebases.
Your assertion really depends on your definition of "widely supported". Jumping from fad to fad, however, is only effective as a way to waste time.
>Staying in the cooking pot of ever larger proprietary and exotic hacks is how organizations grind to a halt.
Using established build systems for C++, whether autotools or even plain makefiles, ensures that the project doesn't suffer from bit rot and developers can focus on developing code instead of wasting their time trying out every flavor of the month. And do note that the main reason C++ hasn't adopted any official build system is that each and every flavour of the month ends up being very horrible and very hard to maintain.
Yes, including CMake. If a build system is known for telling essentially all users that they are doing it wrong, that means the build system is the problem.
As one (slightly abnormal) example that I've worked on: we would build the meat of the solution, then run our tests. Upon success, we'd generate language bindings (think swig), compile them, and run API tests against those. The languages included C++, C#, Java and Excel 12 bindings (which required their own .cpp files). I can easily see this type of special case worm its way into larger projects and be valid, yes.
> Upon success, we'd generate language bindings (think swig), compile them and run API tests against those. The languages included C++, C#, Java and Excel 12 bindings (which required their own .cpp files).
Sure, so you need support for a multi-module project. Any serious build tool will have that.
Back in the days when we used Maven, we had an involved asset building system. It was very hard to shoehorn this into Maven's fixed lifecycle phases (they might be customizable now, I don't know) to the point where we had a Maven plugin so powerful, it eventually made Maven redundant.
> we had an involved asset building system. It was very hard to shoehorn this into Maven's fixed lifecycle phases (they might be customizable now, I don't know)
They're not customizable, by design. I find people with this problem tend to be missing the option of using a multi-module project: if you need to build A, B that depends on A, C that depends on B and so on, the right way to model that in maven is to have those things in separate modules with dependencies between them. That way your dependency graph lives in the normal maven representation that anyone working on it can understand, and the within-module phase ordering continues to be what anyone working on it will expect (compile before test before package, etc.)
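As a rough sketch of what that multi-module approach looks like in practice (module and group names here are invented for illustration, not from any project mentioned above):

```xml
<!-- parent pom.xml: the reactor builds modules in dependency order -->
<modules>
  <module>core</module>
  <module>bindings</module>
  <module>bindings-tests</module>
</modules>

<!-- bindings/pom.xml: declaring a dependency on core guarantees that
     core is compiled, tested, and packaged before bindings builds -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>core</artifactId>
    <version>${project.version}</version>
  </dependency>
</dependencies>
```

The ordering you'd otherwise hack into lifecycle phases falls out of the dependency graph instead.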
> As a maven advocate I've commonly seen this claimed ("we have to have an ant build because we need to do x/y/z custom thing") but I've never seen an example that held up under scrutiny.
In my experience, this just means they don't know how to do custom thing in Maven. Maven is very configurable; it can do pretty much anything you want, as long as you're willing to throw enough XML at it.
And if you are throwing a ton of XML at it, and your pom file starts to look like a nuclear disaster, that's a good sign that the custom thing you're doing isn't such a good idea after all.
I completely agree, hence the reason I only talked about the experience for newcomers. Maybe tools like conan[1], vcpkg[2] etc. will help improve the situation for newcomers.
The stronger the opinion, the harder unorthodox things are. The weaker the opinion, the harder it is for newbies to get going. There is a trade-off here.
I ran into this trying to get into it about 3 weeks ago. I'm so used to things like npm, ruby gems, go packages, pip that it felt like a huge task just to get something built or settle on a way for me to build mine.
Even grabbing libraries from github, I was unsure if I should grab just the headers and DLLs, or import the entire tree and mashup my build scripts with theirs.
I wish I had stayed with it since college but now I feel I have to reach for something like Go or Rust just to get something shareable in reasonable time purely because of the tooling, whereas I would really like to use the C++ language itself specifically for working with win32.
> Even grabbing libraries from github, I was unsure if I should grab just the headers and DLLs, or import the entire tree and mashup my build scripts with theirs.
Honestly, this is something that apt (et al.) make so easy on Linux distros that when you move to Mac or Windows you have a hard time believing that in 2018 people still have to _deal with this shit_ (installing libraries and their headers by downloading each one separately, running an "installer", and trying to figure out where it put everything).
Like, why? It's one of the principal things that has kept me away from developing on Windows for the last 10 or 15 years: the absolute lack of standards with regard to dev package management. Every time I decide I need to get something working on Windows, it's an incredible pain in the ass compared to Ubuntu. Oh, now go to THIS website and download this installer and run it, now this one, now this one. And figure out for each one where it decided to put the headers. Now copy the DLLs around, etc. (Yes, there are system32 and Program Files, but installers don't adhere to this by any means, and discovery via a build system is not a given.) It's just barely worth it, and people only put up with it because of the huge user base. Honestly, for small projects I've found it easier to install MinGW under Linux and cross-compile instead of working on Windows; it's that much of a pain.
At least Mac has Homebrew, and Windows has a couple of good solutions now I guess (Chocolatey, who came up with that name?), but they are not officially supported, and that is sad. I understand they don't want to pay an army of people to package open source software full time, which is effectively what keeps Debian and Fedora going... but they should. I would say it's one of the principal reasons they ended up doing something like providing an Ubuntu environment on Windows. It's not just that people wanted a Unix-like environment; it's that they wanted _package management_.
For what it's worth, I develop almost exclusively on Ubuntu, but built-in package management is no panacea.
You get one version of each package, and it is often years out of date. If you need a newer version you'll have to install it yourself and then try to navigate how to get your projects to use your local version. For any popular package that is still undergoing a fair amount of change it is likely that eventually some project you want to build will need a newer version than the one your distro provides.
So you still need a solution to the versioning problem separate from the distro packaging environment, at least for development.
Windows continues to move in the right direction with package management. Your comment "not officially supported" may not be correct depending on your definition of official. Package management is native in win10 with OneGet, and PMs like scoop have been improving in quality and reliability to the point that I don't even think about them anymore.
I do agree that there is a lot of room for standardizing tool chain and dependency install workflows, but the tools themselves are of high quality now imo.
Headers/libraries on an Ubuntu LTS may be years out-of-date.
Am I supposed to just download a tar.gz and do the configure/make/make install dance to /usr/local or /opt or something for each dependency that I need a more current version of?
Even worse - what if my deployment target is 3-4 years older than the machine I'm developing/compiling on?
I don't think anything is "solved" with a traditional UNIX environment. Not even close.
> Am I supposed to just download a tar.gz and do the configure/make/make install dance to /usr/local or /opt or something for each dependency that I need a more current version of?
No, you're (usually) supposed to stick with the out-of-date version rather than the shiny new version. Doing otherwise means taking on the responsibilities of a distro maintainer, usually without realizing it.
> Even worse - what if my deployment target is 3-4 years older than the machine I'm developing/compiling on?
Either use the older version for developing or at least have a version of that environment for CI.
The chocolatey repo (https://github.com/chocolatey/choco/wiki/History) explains the name as a pun based on the previous package manager NuGet: "Chocolatey started out as a joke because everyone loves Chocolatey nougat (nuget)."
For building my (rather small) projects for Windows I use MSYS2 [0]; it uses pacman for package management and has quite a big library of ready binaries.
I've been using it too. It's getting better, finally, thank goodness ;)
Still a far cry from what is offered on Linuxes but so much better than nothing.
Gonna be very hard to do. For the last 15+ years COM has been an essential part of the WinAPI, and too many things in Rust conflict with COM: ownership, OOP, virtual tables, inheritance; they are all too different.
Not just in Rust; in all languages, actually. You only have 2 good choices for complex platform-dependent Windows development: either C++, or a Microsoft language with COM support embedded deep in the runtime, like VB6 or VBScript or .NET.
At its core, COM is two things:
1. A binary ABI for interoperability of components written in whatever language.
2. A reference-counting mechanism and "casting" mechanism (IUnknown).
MS has an article that shows how to build a COM component from the ground-up using only plain C structs containing function pointers (which is how vtables are implemented anyway: array of function pointers). What you think of inheritance can be implemented with a struct containing nested structs of function pointers. Etc.
It's not insurmountable, e.g., PyWin32 has utilities for both consuming and implementing COM components.
COM is quite a lot of things. Also threading model, date-time-currency formats, late binding/dynamic dispatch/scripting, type info both runtime and design time, IPC, RPC, security, and more.
> What you think of inheritance can be implemented with a struct containing nested structs of function pointers
What I think of inheritance in the context of COM is called “aggregation” and it’s more complex than that.
> It's not insurmountable
It’s not, but you’ll spend ~2x more time doing that compared to C++, and ~4x more time compared to VBScript or .NET.
Update: that 2x-4x is for implementation. For consumption C++ and .NET are close, but anything else is more complex due to FFI in between. C++ or VBScript don’t need FFI, .NET runtime is designed around many key parts of COM e.g. they have reused HRESULT’s and type info format.
I think Delphi and C++ Builder were only good enough while WinAPI meant “C API”.
JavaScript is OK for platform-independent development with electron. For platform-specific i.e. WinRT not that good, .NET is just better, and MS put the JS libraries in maintenance mode: https://github.com/winjs/winjs
Python was never good. I've tried using the WinAPI through C interop a few times and didn't like it at all, even though what I did wasn't very complex.
Delphi and C++ Builder have first class support for COM and UWP. In fact, both had better COM support before VB migrated from VBX to COM with VB 6.0.
JavaScript has first class support on UWP. Of course WinJS is in maintenance mode. The future of JavaScript on UWP are PWA apps, with direct access to UWP APIs, no more need for WinJS, which was a Windows 8 library.
Python has supported COM with help from PyWin32 since ages. Hardly any different from VBScript.
When I search “Delphi DirectX”, I only find some Russian website last updated in 2009 and offering Direct3D 10.1 bindings. The newest one is 12, the oldest supported is 11.2.
When I search “Delphi Media Foundation” it’s better, finds bindings updated 4 years ago.
These COM-based technologies are among reasons why I sometimes still pick C++ even for new projects.
Also, in both cases just bindings are not enough, the frameworks are complex, need code samples, stackoverflow community, sometimes books, etc. There’re tons of resources on using these from C++, many from C#, very few for the rest of the languages.
> JavaScript has first class support on UWP
Win32 GDI also has first class support on Win10, just because it’s supported doesn’t mean it will go anywhere. I have doubts MS will continue with that support, they haven’t achieved much and need to do something else, e.g. include CEF in the OS.
> Hardly any different from VBScript.
The difference is that VBScript _is_ COM. All VBScript variables are VARIANT, all strings are VARIANT of VT_BSTR type, and all objects are IDispatch. The interop is not required, COM is already native to the runtime. I can write a COM server in a page long text file with *.wsc extension and VBScript inside, register it with regsvr32.exe, and consume from any language, e.g. `#import "progid:..."` in VC++. I’m not sure you can do same in Python.
So I talk about COM, and you move the goal posts about DirectX in particular, that only implements a subset of COM, which requires writing wrappers by hand, including in .NET, hence Managed DirectX, XNA and SharpDX projects.
Apparently you missed the news regarding PWAs in Windows and its support for native UWP. Again, WinJS got dropped, because it was tied to the Windows 8 programming model, which got improved on Windows 8.1, only to be replaced by UWP on Windows 10.
Win32 GDI actually has "we wish you were all using Win2D and DirectDraw" support on Windows 10.
> you move the goal posts about DirectX in particular
In my first comment I wrote that COM became an essential part of WinAPI. Excel runs on Windows but it's not Windows. DirectX on the other hand is in the OS kernel; major parts are dxgkrnl.sys. Same applies to Media Foundation, the Windows shell and clipboard, WMI, and many other OS components.
> which requires writing wrappers by hand, including in .NET, hence Managed DirectX, XNA and SharpDX projects.
Right, and all except SharpDX are first-party i.e. made by MS. MS made even more wrappers, WPF, Direct2D and UWP are also manually written wrappers around D3D 9 or 11.
How many wrappers has MS made for Delphi or C++ Builder?
> you missed the news regarding PWAs in Windows and its support for native UWP
I’m a bit skeptical about the technology.
> examples of automating Excel with Python
Simple IDispatch calls are not that hard even in pure C. It's the more advanced things that require better support for COM: servers, events/callbacks, data structures like arrays and dictionaries, etc.
Rust works great on Windows. I use it every day. Is there anything specifically you had in mind that needs to be improved? Off the top of my head there is some tooling work that needs to be done, but cargo and rustup handle compiling, versioning, and dependency management flawlessly imo.
I'm sure Rust works great :) But vcpkg was made by Microsoft and I suspect is rather easy to get working on that platform. On the other hand, I've used conan and had no problem e.g. getting FFmpeg working. And, thanks to conan's awesomeness, I can even make it work with clang on windows -- so that e.g. I can use all the latest c++17 features on a library like Boost.Hana and range-v3 (of Eric Niebler fame).
Someone else has mentioned vcpkg. Another project that gets you closer to Rust or Go in this respect is conan, which I really enjoy using. It's still not as polished as other package managers (e.g. maven), but for a lot of very useful libraries, on a reasonable platform (recent OS, recent compiler) and for most important libraries (FFmpeg, boost, abseil, range-v3, the list goes on...) you can just add a single line to your conanfile with the version you want and off you go.
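For a flavor of how terse this is, a conanfile.txt really is roughly one line per dependency (the package names are real conan packages, but the exact version numbers and generators here are illustrative and depend on your conan version):

```ini
[requires]
boost/1.83.0
range-v3/0.12.0

[generators]
CMakeDeps
CMakeToolchain
```

Then `conan install .` fetches or builds the packages and generates files your build system consumes.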
And remember -- a lot of the complexity comes from historical reasons and also from allowing you to run on, say, an Atmel 328p or another microcontroller. So the complexity is partly from having such a wide target area.
Have you looked at vcpkg? It's the simplest way to use third party libraries on Windows in my experience. Many popular open source libraries are supported out the box by vcpkg now.
I didn't but some of the libraries I needed at the time don't appear to be present. The reason I wanted to use C++ specifically was API hooking and the Windows ETW api. I don't see any of the projects like Detours present so unfortunately it wouldn't have helped me in this case.
It does look promising but because of the mess I'll probably gravitate to another language if I don't need to consume such a low level API.
You may want to look at Qt. Qt brings a wide array of standard tooling, starting with `qmake`. It also has tons of libraries far beyond just graphical ones (most people don't know that). If you go with Qt flavored C++, the "standard" tooling story gets better.
Though, it still won't be anything like ruby or node. Some Qt projects opt for cmake over qmake, for example. But at least getting your project bootstrapped is much simpler/faster.
I went from 0 C++ work/projects to a quality production one right away w/ Qt Creator, their docs, and general bring-you-up-to-speed-on-modern-c++ docs on the web. I only had minimal experience from a decade ago in C++. Very approachable if you are familiar w/ other languages and are the reading type.
Completely agree, I had a similar experience. My team found ourselves liking the Qt approach so much that we started using it for headless projects too, including our server (which used protobufs). Qt is so useful.
CMake has fantastic support for Qt. Decided to add a Qt GUI to a personal project with an existing CMake build setup and the integration was very painless, just ~4 lines added. Finding the solution was a little tougher, however, since nearly all documentation assumes a QtCreator + qmake configuration.
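For reference, the handful of CMake lines in question typically look something like this (the target and source file names are invented; the exact Qt version and components will vary by project):

```cmake
set(CMAKE_AUTOMOC ON)  # run moc automatically on Q_OBJECT classes
find_package(Qt5 COMPONENTS Widgets REQUIRED)
add_executable(myapp main.cpp mainwindow.cpp)
target_link_libraries(myapp Qt5::Widgets)
```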
I suspect a lot of people learn how to use some framework or library that happens to be written in C++. For me it was OpenFrameworks and Cinder during art school.
Make was pretty ubiquitous and standardized. I know in the Windows world there can be a lot of chaos, but building from source has been a method of installing software on Linux for a very long time.
I do not understand the mentality of "opinionated" frameworks or tools being a good thing, and the trend toward said frameworks disturbs me greatly.
What people call "opinionated" is nothing more than something being "architected". Someone has made a bunch of decisions for you that down the road you have no idea whether or not it will actually be good for you.
Using these types of frameworks short-circuits the process of being able to understand the nuances of building software. It's the apartment building or high rise of software development.
It turns design of your system from a personal journey toward the solution of your problem into little more than assembling linking logs.
I can understand the argument for an opinionated toolbox, but not for wanting base tooling itself to be opinionated.
> Someone has made a bunch of decisions for you that down the road you have no idea whether or not it will actually be good for you
Opinionated also means that these decisions will be the same in other places/projects I may join. In the medium and long term, that advantage may easily offset the value of custom decisions.
Some subset of those decision are also things which just don't matter, like which side of the road to drive on— the value is in a critical mass of people all doing the same thing more than it is in the merits of either approach.
A lot of stylistic stuff like filesystem layout falls into this bucket.
Appreciating this lays out the boundaries for the more borderline stuff. An example of this, IMO, is things like unit testing and mocking frameworks. Most are pretty similar, and offer similar functionality, but perhaps with slight differences in capability or approach to solving common corner cases.
You might have some highly-specific need which demands a particular tool, but it's likely that you're best accepting whatever is most popular in your chosen ecosystem (gtest for C++, nose/tox for Python, etc).
A framework that gives you all the choices that you have without it will suffer the inner-platform effect and end up becoming just as complicated to use as the thing it was supposed to help you with. The whole point of a framework is to make certain decisions for you, to make certain paths easier by closing off other possibilities. If you don't trust the framework to make good choices for you, you're better off not using it.
I'm curious, what would you consider base tooling in the context of a package manager/build tool? I don't think I made it clear enough with my comment, but I was thinking of tooling akin to Rust's cargo, OCaml's `dune + opam`, etc.
This is why I'm so happy about Bazel lately! It cleanly solves a number of tooling pain points for me:
- Cross-platform builds mostly "just work"
- Package management, while not painless, is a tractable problem. You can teach your build how to incorporate libraries from tarballs, the local filesystem, git repositories, etc.
- Support for common toolchains (gcc, clang, msvc, android, ios) is built-in!
After the hellscape that is CMakeLists.txt, Bazel is such a breath of fresh air. I'd encourage anyone looking at C++ for new projects to give it serious consideration.
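To give a sense of what Bazel builds look like, here is a minimal BUILD file in Bazel's Starlark syntax (the target and file names are made up for illustration):

```python
# BUILD file: one library target and one binary that depends on it
cc_library(
    name = "greeter",
    srcs = ["greeter.cc"],
    hdrs = ["greeter.h"],
)

cc_binary(
    name = "app",
    srcs = ["main.cc"],
    deps = [":greeter"],
)
```

`bazel build //:app` then builds the dependency graph with the configured toolchain, on any supported platform.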
That's an availability bias. While cmake is common in commercial settings, I haven't seen autoconf used outside of open source. I've seen scons and meson both used as well at various companies.
Meson seems to be spreading pretty rapidly. After I was introduced to it a few years ago for work, I started using it as my default build system for C++ elsewhere and more C++ devs I know have started using it. I've seen it replace CMake and make at large companies for C++.
Meson is much easier to use and configure for C++, being quite obviously designed specifically with C++ builds in mind. It automagically does the right thing for setting up most C++ build environments by default, with virtually no config or quirks you need to learn. It is built on top of Ninja, and generally very fast and efficient. The only real knock against it is that the documentation isn't as good as it could be, and it is evolving quickly, but the fact that it often "just works" doing the obvious thing makes that less of an issue. In my experience, setting up a cmake environment has a significantly steeper learning curve and is more work.
The tradeoff is that Meson takes a fairly rigorous "one correct way" approach to how you organize your builds. It doesn't prevent you from doing anything functional but what it forces you to do is very sensible, minimizes the potential for issues in diverse environments, and reduces the degrees of freedom Meson has to sort out. A prohibition on unfettered creativity, advisable or not, in how you manage your builds has turned out to be a blessing in my experience because it codifies what are very arguably best practices in code and then makes it easy to implement them.
I'm fairly agnostic on build systems -- I've probably used a dozen -- but Meson, even in its relatively youthful state, is the best C++ build system I've used so far in terms of getting things done quickly in a maintainable way that is easy to use for average programmers.
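For comparison, a complete meson.build for a small C++ program really is about this short (the project name, file name, and dependency are invented for illustration):

```meson
project('demo', 'cpp', default_options: ['cpp_std=c++17'])

zdep = dependency('zlib')   # resolved via pkg-config or a built-in fallback
executable('demo', 'main.cpp', dependencies: [zdep])
```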
How would you evaluate Meson as a build system for C projects? I _think_ C and C++ build process is very similar, but I only work professionally with C, so maybe there are some quirks to C++ I am not aware of.
Basically, I wonder if you would recommend Meson as a build system for a mostly-C project with maybe some C++ parts and some extra external tools run on top.
I agree with your presumption that it almost certainly should work well but I haven't actually used it for a C project. Offhand, I can't think of anything that I would expect to break.
Has anyone done a comparison to show what the advantages and disadvantages are of each build system? Seems to me that most people pick one, and they stick with it without knowing much about the other two. In the case of automake, I didn't like it at all, and it felt more like it was geared towards open source projects that need cross compilation. For an internal large project, which is mostly what I work on, I'm curious to know what the advantages and disadvantages are of cmake versus Meson.
This is true. What's even worse is when you download some stuff from GitHub to play around with and learn from, and then get frustrated by being unable to build the darn thing!
I started learning C++ about 1993 or so, and I feel like I know less about it now than I did in 2001. Of course I haven't written a line of C++ in anger since about 2003, so there's been ~15 years of changes that I haven't kept up with. Part of me wants to dive in and really re-learn modern C++, but finding time is always the challenge.
Just for context, I was such a C++ bigot at one time that my car's license plate read C++HACKR and my personal website was on the cpphacker.org domain. But somehow I drifted into doing more Java and wound up mostly leaving the C++ world.
Edit: heck, this conversation triggered such a pang of nostalgia that I just re-registered cpphacker.org. Maybe I'll go back to having a personal website there. I've been thinking hard about setting up a new personal site and ditching Facebook anyway...
I've been using Python since 2008-ish on and off for various scripting tasks and small tools, I can't say that I've "learned" it. They keep adding new stuff, some things I forgot, others I never needed.
I've been using C++ professionally since ~2006 and for similar reasons I won't say that I master it either. Metaprogramming is a clear weak point for me, but OTOH I find the whole exercise pointless and unpleasant.
Unless one wants to be on the standards body, it's not really worth trying to learn C++ (or any other language) at 100% IMO.
It's worth pointing out one measure of complexity: the Stroustrup book (4th ed) is 1376 pages, and that was 2013. Since then there's been how many layers more on top?
As a contrast, the description of Common Lisp syntax fits on a sticky note, but the library and semantics spec take about the same number of pages to describe a similar amount of functionality.
The two extremes represent different locations on the syntax-vs-library complexity tradeoff continuum.
> The two extremes represent different locations on the syntax-vs-library complexity tradeoff continuum.
I'd say it's a syntax x language semantics x libraries triangle. Common Lisp may have trivial syntax, but there are some hidden gems of complexity in the language semantics level. To this day I don't really understand eval-when, and I've been using Common Lisp for the past 9 years.
Mastery comes when you know most of the 1st order stuff and can even anticipate the stuff you don't know because you also know the 2nd order stuff. In Java this is easy because after you get proficiency you can really dig in and read the JVM Spec and the JLS(1), and you have lots of great, powerful tools available to examine the running JVM, bytecode, etc. In other languages, this is a lot harder. JavaScript (ES6) kinda sorta has a specification(2) but it didn't start out that way and the jsvm doesn't have one (AFAIK). The implementation and the tooling for JS is widely varying, and typically includes lots of non-JS stuff like the DOM and networking - which is (really!) great but lacks focus. Somehow lots of popular languages exist without specifications, like Python, Ruby, PHP, Markdown. This is, IMHO, a very serious drawback because to become a master of these languages requires that you become a master of the concrete interpreter implementation, which is easily an order-of-magnitude harder and less direct than reading a specification.
>What language can one say that they fully master?
Incidentally, after working with node for a couple years near really knowledgeable coworkers, it's been the first time in my life as a developer that I feel I have a near complete understanding of the language I'm using.
Granted, most of that feeling probably comes from the small standard library (which is the root of several important issues, as we all know), and the feeling adds nothing to your life as a developer except for a vague sense of comfort, but it's still nice.
Sure, other languages are also hard to fully grok. But the consequences of not knowing (or misunderstanding) something in, say, Python are typically way less serious than in C++.
This is not all due to language design, though; C++ tends to be used in domains that are more tricky, like those with real-time constraints, constrained devices, and unattended operation.
I'm like you! And after I argued with someone on another thread here and they told me to watch CppCon talks I realized there are plenty (and plenty of new C++ features) that are not about template metaprogramming and that are useful... won't learn it at 100% but the experience has made me feel like I could improve my C++ every day.
But you don't hear people make the same sort of comments about languages like Go and Scheme. There's a continuum from complex to less complex, and C++ is on one end of it.
No, I hear people complaining about writing boilerplate by hand (Go), not knowing which libraries to use, incompatibilities between libraries, or political discussions about what the standard library should be (Scheme).
how about Java? it's a small enough language that one can encounter and use every language feature within a few years of professional development, and without needing to work in any obscure domain.
admittedly, Java got a lot bigger in Java 8 and beyond. but it's still the language I'd put forward as "most possible to master".
Spot on really. I've been actively using it for more than a decade, including a good portion of the darker things, and still won't claim 'I learned it'. I did 'learn how to use it for the particular purpose of some projects' is what I'd say. Or in other words: I'm fairly good at C++ but from time to time there's still Q&A on reddit or stackoverflow that makes me go like 'whaaaaaat?'
My thought exactly. I know a few people who have been using C++ for several years who go "Oh, you can do that?" every now and then. I guess it's a blend of freedom of design and programmers being a little stuck in their ways of how a certain problem is solved.
Yeah, I've been using it since 1992 and am now thinking of "relearning" it as well, having drifted off into C# and Python. It'll be interesting to see how things that seem natural in those languages feel in modern C++.
I learned it 15+ years ago, and I stopped paying attention around ~2010, just before significant changes were made. Even though I've read up on some features and concepts from C++11 and C++14, I feel so disconnected from what's going on, as if what I learned was a completely different language.
Can anyone recommend any source for someone with lots of pre-C++11 experience to get up to speed with current capabilities, and the current community?
EDIT: Thanks everyone for the suggestions!
EDIT2: I skimmed Anthony Calandra's list, and most of the new features seem indeed just cute, but not fundamental changes. Still, I want to reiterate the worry I expressed downthread: how much did the underlying semantics, the idioms, and the general way of thinking change?
I've been in the same boat--relearning C++ after using C++98(!) for a long time. My first go-to book was _A Tour of C++_ , 2e by Stroustrup. Also _Effective Modern C++_ by Scott Meyers (of _Effective C++_ fame).
Other books I've been reading:
_C++ 17 The Complete Guide_ by Josuttis
_C++ 17 in Detail_ by Filipek
More of a meta-comment, but the underscores you used to presumably get markdown italics should actually be asterisks on HN. Normally it doesn't make a readability difference but trying to parse the book names from your post was a tad mentally painful. Took me a little while to realize the asterisk v. underscore thing so I thought I'd pass it on.
Oops. Sorry. I had completely forgotten HN markdown. I usually use the underscores around book titles with cleartext in mind. I'll remember asterisk in the future.
Aaand you've just learned all the formatting HN has :). That, and URLs automatically turn into clickable everywhere except the body of a self-submission (i.e. submitting a text instead of a link). Any other formatting here is just convention.
I used to swear by Meyers in my old C++ days; his Effective C++ and More Effective C++ were my favorite software books. But recently, in the course of catching up with modern C++, I got the impression his ideas aren't considered good in the current community anymore. Can anyone comment on that?
Interestingly Meyers is now a "retired C++ expert". He says he can't keep up anymore with the current C++ standard development to properly maintain errata for his books.
I'll second A Tour of C++ 2e. It's a great way to get up to speed on the most important aspects of C++. It's very readable if you're already familiar with C syntax style languages.
If you already know the language's flow control, you don't need to read paragraphs of a book to relearn how to "think" in the language. You just need the "what's new" parts.
I'm not confident how deep the impact of the new features is on thinking in the language. I can learn the new features, but I fear I'll still be essentially writing C++03 with C++17 bells & whistles. Did the semantics and idioms change much?
(I hear the compile times are the same though -.-)
There's nothing really fundamental to the changes in C++11 and beyond: There are lots of little tweaks, as well as some deprecations and outright removals. However, there's the prevailing sense in the community that the many tweaks together genuinely change the flavour of the language, at least from the perspective of traditional C++ development.
The combination of "auto" (plus template type inference), ranged for-loops, initializer lists, structured bindings, lambdas and certain new standard library mechanisms (std::unique_ptr, std::variant, std::optional, std::move, etc.) together change the high-level expressiveness of the language, encoding a lot of common patterns and eliminating common, redundant code. For example:
std::vector values = {1, 2, 3};
for (auto v : values) { ... }
New for loops support structured bindings:
for (auto &&[key, value] : mapOfStuff) { ... }
There are a lot of little conveniences, such as the ability to default a constructor or accept an initializer list in the constructor, and (as of C++20) designated initializers:

    struct Person {
        std::string name;
    };

    Person p = {.name = "bob"};
The forthcoming ranges library in C++20 also finally makes lazy sequence iteration functional and composable, e.g. something like (off the top of my head):

    getNumbers()
        | view::filter([](int n) { return n > 0; })
        | view::reverse
        | view::take(10);
Another big change that's coming is concepts, which are somewhat similar to Rust traits or Haskell typeclasses: The ability to put compile-time constraints on template arguments. For example, in classical C++, invalid template invocations generate notoriously bad error messages, since templates are only compiled once they have been expanded with their arguments:
template <typename T>
T add100(T v) {
// Compiler will fail on this line:
return v + 100;
}
auto n = add100("hello world");
Concepts will let you constrain T to something that actually supports the operations you need:
template <Incrementable T>
T add100(T v) {
return v + 100;
}
// Compiler fails on this line:
auto n = add100("hello world");
Then there's the contract spec, which adds support for Eiffel-type contracts.
Lastly, the C++ designers seem to have thought a lot about safe memory semantics, so there's a renewed focus on smart pointers (that actually work) together with move semantics and a tightening of copy-constructor semantics. The end result is that it's generally easier to work with RAII and avoid explicit heap allocation with the "new" keyword.
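A minimal sketch of that style (the `Buffer`/`makeBuffer` names are made up for illustration): resources live in RAII types, ownership moves rather than being copied, and nothing is deleted by hand.

```cpp
#include <memory>
#include <vector>

// A toy resource: the vector manages its own heap storage (RAII).
struct Buffer {
    std::vector<int> samples;
};

std::unique_ptr<Buffer> makeBuffer() {
    auto b = std::make_unique<Buffer>();  // no raw "new"
    b->samples = {1, 2, 3};
    return b;                             // ownership moves to the caller
}
// When the returned unique_ptr goes out of scope, the Buffer is freed
// automatically; there is no code path that can leak it.
```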
Wow, thanks for the detailed description! I guess I'll have to think a bit about each new feature from C++11, 14 and 17, and then Google some more, to understand the full implications.
I use it every day at work, and there are days when I leave thinking it's the best language on the planet and others when it makes me want to throw my laptop through a wall.
Like, let me read the source code of some open source software today.
To me, this is the most powerful reason to learn C and C++. If you know these languages, you can read the codebases of all the things you use and contribute to them (every browser in existence, every Unix in existence, most user space tools, many GUI tools, many low-level libraries that mobile apps use).
Even if everybody stops writing C/C++ today, you'll still be using software written in C/C++ for decades.
And of course, that's a joke -- there's still at least 10x more C/C++ being written every day in the world than any other language that compiles to native code (Rust, Go, etc.)
He has no regrets, but does have a basic_regret&lt;charT, regret_traits&lt;charT&gt;, allocator&lt;charT&gt;&gt;
Welcome! As someone who’s been using C++ pretty much continuously for two decades, I think there has never been a better time to learn it. We finally have compiler competition that has resulted in robust support for the language’s great features.
They all may have excellent C compatibility, but that is not the point. The point is that without C compatibility, C++ could be smaller and simpler, and yet still meet the same goals and cover the same use cases. Objective-C has a Smalltalk-ish object system glued onto a C base; fundamental C++ techniques like RAII are not possible there. D has GC. C# runs in a VM. I would not call a language with any of those features very similar to C++.
OTOH, if not for C compatibility we would not be having this discussion, since we would most likely never have heard of C++.
Well, Common Lisp is simple. Except maybe the Metaobject Protocol (which ultimately didn't make it into standard, but it sort of still is). And except eval-when. Eval-when is magic.
In modern C++ it is very much possible to write fast and safe code, but the tradeoff is now in the amount of stuff the developer needs to know to be productive (which in my experience isn't as bad as people like to go on about here and on reddit, even if it's still pretty shoddy).
That's a bit unfair. A recent headache for my C++ project was mixing audio buffers. The buffers are cached to avoid repeated reads, the channel count differs, the sample rate can differ, the output rate can differ, mixing has gain limiters applied, etc. Lots of buffers, lots of copying, lots of boundary cases (e.g. sample #1 doesn't start until t=100ms, sample #2 ends before sample #1, etc.).
I struggled for days getting all the buffer copying right (raw pointers to many interim buffers). You may think that my problems were due to using "crap C++ in an unidiomatic way". However, I challenge you to find a better, equally efficient method to accomplish the same goal. The end MixAudio() function looks like plain old C code from the early 80s, but how else can you tackle this problem in a modern way?
> I struggled for days getting all the buffer copying right (raw pointers to many interim buffers).
It seems like you struggled with math (indexing logic) rather than C++.
> how else can you tackle this problem in a modern way?
You could try to write an iterator that takes into account sample rate, channel count, etc. That way the logic for handling buffer formats and iteration over samples is encapsulated into a single reusable component.
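As one possible sketch of that suggestion (all names here are hypothetical, and this handles only start offsets, not resampling): wrap the per-clip boundary logic in a small view type, so the mixing loop itself stays trivial.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical reusable component: a view over one clip's samples that
// knows its own start offset and returns silence outside its bounds.
class SampleView {
public:
    SampleView(const std::vector<float>& data, std::size_t startFrame)
        : data_(data), start_(startFrame) {}

    // Sample at absolute frame t, or 0.0f if the clip isn't active there.
    float at(std::size_t t) const {
        if (t < start_ || t - start_ >= data_.size()) return 0.0f;
        return data_[t - start_];
    }

private:
    const std::vector<float>& data_;
    std::size_t start_;
};

// Mixing then becomes a plain loop over views; every boundary case
// ("sample #1 starts at t=100ms", "sample #2 ends early") is handled
// once, inside SampleView::at.
std::vector<float> mix(const std::vector<SampleView>& clips,
                       std::size_t frames) {
    std::vector<float> out(frames, 0.0f);
    for (std::size_t t = 0; t < frames; ++t)
        for (const auto& c : clips)
            out[t] += c.at(t);
    return out;
}
```

Channel count and rate conversion could be folded into the same view (or a stack of views) without the mixing loop ever changing.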
I'm not entirely sure (only just learning audio programming atm), but the way I'd approach it would be to encapsulate the format details and then use ranges/iterators.
Efficiency should be roughly the same, especially given that you can optimise loops (and other code) with just as much access.
Similar experience here ... it took no time to learn C++ because it's not the pre-C++11 language anymore.
You can make whole applications without touching memory management with RAII patterns; and C++ APIs are usually not the Java OOP monstrosities everyone hates.
Operator overloading is just so damn useful. You can make very elegant APIs in C++ as a result.
I also used 2018 to learn and get proficient in C++.
Honestly, I don’t really like it. The only real reason for that is that using smart pointers is verbose, but I don’t want to do manual memory management where it’s possible to instead rely on RAII.
Ironically enough, though, C++ is probably the second of two languages I’d consider my “go to.”
If you're using smart pointers enough to be bothered by verbosity I think it's somewhat likely you're overusing heap allocation and / or not making enough use of standard containers. Explicitly dealing with heap allocation via smart pointers should be a tool you only reach for occasionally in most code. Heap memory management should usually be wrapped in another library class, either a standard container or a custom type and most non library code should generally be using stack allocation.
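A small illustration of that advice (the `Point`/`pathLength` names are invented for the example): hold values directly and let a standard container own the heap memory, rather than wrapping every object in a smart pointer.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Value semantics throughout: the vector owns the heap storage, the
// Points live inline in it, and no smart pointer appears anywhere.
double pathLength(const std::vector<Point>& pts) {
    double len = 0.0;
    for (std::size_t i = 1; i < pts.size(); ++i)
        len += std::hypot(pts[i].x - pts[i - 1].x,
                          pts[i].y - pts[i - 1].y);
    return len;
}

// The style being cautioned against would be
//   std::vector<std::unique_ptr<Point>> pts;
// which adds an allocation and an indirection per element for no benefit.
```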
Well, I guess "proficient" in relation to someone who had started with the language 1 year ago. There is definitely still a lot left to learn. I don't intend to claim mastery, though.
I am learning Rust currently and it's fascinating. The concepts are very different and force me to think differently. I hope that Rust soon supports things like kernel programming, asynchronous networking, GUI programming, etc., that C++ is so good at.
Rust already supports kernel programming, that is what Redox is. You can do async networking today on stable with tokio and futures or on nightly with futures-await and prototype async/await syntax. You can do GUI programming with gtk-rs.
It's pretty unlikely anyone is going to take on the monumental project of "write a software GUI kitchen-sink suite like Qt, but in Rust" any time soon. Not because you can't, but because such projects are by their nature millions of LOC at minimum.
Many here talk about the complexity of C++. I concede it's a huge language, and not a beautiful one. However, I wouldn't call it that difficult. Being a systems language, once you understand (roughly, sans optimization finesse) how the machine code is generated, it starts making a lot of sense. Maybe C++ is even educational because so many nuts and bolts are showing. Personally I'd recommend some systems programming to everyone, but if that's not on your radar at all, maybe don't bother. C is also a fine choice.
Many programming languages try to have perfect symmetry and because of that, fail to be useful in the real world. C++ has constructs such as template specialization that allow symmetry breaking. Symmetry breaking is what makes nature beautiful and therefore I would argue C++ IS a beautiful language. C++ is complex because the world is complex. Just as a mountain is shaped by time and the elements, C++ is also shaped by decades of experience and change.
This is what also makes Bjarne one of the greatest language designers. Where most designers would have gone off making new languages, Bjarne has been polishing the C++ stone the whole time.
The complexity of the world arises from simplicity. All complex things in the universe in the end are just atoms.
Is it better to build a complex programming language that handles real world complexity?
Or is it better to build a simple programming language where complexity is an emergent property than in turn use that to handle real world complexity.
One issue with building a complex programming language is that if you build the language to handle one type of complexity, it becomes hard to mutate or shift that language to handle a different type of complexity. This is essentially the issue with what's happening with the newer versions of C++.
"C++ is hanging around for a long time ( 34 years till date according to Wikipedia). It is powering some of the world’s oldest and stable piece of software written over all these years."
Autodesk AutoCAD (36+ years old) is one of those pieces of software: CAD software that is still widely used today, and very likely used to design physical products many people use, from your toothbrush, office chair, and towering skyscrapers to parts of the jet you flew in on your last vacation.
> "I have to pay: First and main concern being that the standard study materials for reading about C++ are not free."
I don't think this is more than a minor issue. If you can't afford it, the book is very easy to obtain online for free, either through piracy or a paid / free trial Safari Books Online subscription. If you felt so inclined, you could pay for the book after making money with the C++ it taught you.
I'll take this opportunity to plug Bo Qian's C++ videos on youtube if anybody is interested in learning modern C++. They are to the point, code-focused, don't waste time on basic syntax, and cover a lot of ground (lvalues and rvalues, move semantics, inheritance types, etc.). Only issue is they are only current to C++11.
I like that the author mentions the STL explicitly. The STL is one of the most beautiful libraries ever written. There are few language libraries that are as clear and well designed as the STL. Alex Stepanov is completely underrated outside the C++ community.
That is pre-C++11 and woefully out of date. To pick out the first two "defects" listed: std::auto_ptr was deprecated in C++11 and removed in C++17, and std::unordered_map was introduced in C++11.
It's kind of frustrating how much vocal (often outraged!) C++ criticism comes from people who haven't seriously used the language in years, if ever. It's just noise.
Like a few days ago I saw a bunch of complaints about how C++14 and beyond were adding so much stuff and it's just bloat or overcomplication. No! Using that stuff makes my life so much easier, whereas C++98 was indeed a miserable nightmare.
In particular, fully project-aware autocomplete is a baseline feature. Does emacs or Vim offer an autocomplete plugin that matches the power of IntelliJ, or even Xcode?
Is CLion worth the money? Other paid alternatives?
If you're comfortable with emacs, irony + rtags does pretty much all the IDE-type things I need it to. You can autoconfigure rtags based on your build system (it supports basically all of them). You get variable type hints, function arg hints, contextual autocomplete (including template stuff), jump to definition, find references; you can fuzzy-find and jump to methods/identifiers/symbols across your project, and compiler warnings are highlighted inline. It's a pretty good setup. I use the doom-emacs distribution, which has it all configured and ready to go; I'm sure Spacemacs also has a good setup.
I haven't used CLion so can't comment on that, but I have tried a few other alternatives. QT Creator [1] has always worked very well for me. If you want to stick with vim there is YouCompleteMe[2]. You can also try using any editor that supports the LanguageServer protocol [3] and then use a C++ language server to provide cross references, hierarchies, completion and more. Microsoft has a language server plugin for vscode [4], there is also CCLS [5], cquery [6] or clangd [7] which can be made to work with both vim and emacs.
I've been coding heavily in C++ for over 2 decades now, and when I interview someone I can typically get a pretty decent picture, within a general 3-5 year range, of how long they've been coding, based on the various details they give when answering questions about the language. And yes, of course someone could have 10 years of experience, or the same experience repeated for 10 years and never grow, but in general I've found I'm pretty spot on.
So I agree, 1 year doesn’t even begin to cut it. You need more than book knowledge, you need the experience of watching your masterpiece crumble under its own weight (sometimes several times) or have the joy of hunting some obscure bug because you failed to recognize the trap you had set for yourself.
I certainly wouldn't want to scare anyone off from C++, because personally I find the empowerment is worth the pain. It just takes time to learn what to avoid.
>you need the experience of watching your masterpiece crumble under its own weight (sometimes several times) or have the joy of hunting some obscure bug because you failed to recognize the trap you had set for yourself.
Very good point, but that's by no means unique to C++. Maybe that level of experience ;) in a related language, plus a good working knowledge of C++ would be acceptable?
It's too bad he made no mention of the concept of undefined behavior in C++. When coming from any language with consistent semantics, UB is a nasty surprise that makes you seriously question how correct your code is.
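To make the surprise concrete, here is the classic example: signed integer overflow. In a language with consistent semantics the check below just works; in C++ the overflow case is undefined behavior, so the compiler may assume it never happens. (The function name is mine; the behavior claim about optimizers is the standard, widely documented one.)

```cpp
#include <climits>

// UB example: signed overflow. When x == INT_MAX, "x + 1" is undefined
// behavior, so the compiler is entitled to treat "x + 1 < x" as always
// false and delete the check entirely.
bool incrementWraps(int x) {
    return x + 1 < x;
}
// With optimizations on, GCC and Clang typically compile this function
// to "return false" -- the check silently fails exactly when it matters.
// The portable test is "x == INT_MAX", which has defined behavior.
```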
I'm in the exact same boat as you. I've been mostly writing JavaScript for the past 2-3 (with some Go, and more recently Flutter).
I started learning some C++ a few days ago, mostly to get into lower-level code and into game development (Unreal), and I've found it to be pretty great!
I've been going through A Tour Of C++ and it's a great primer on the whole language. I also read the Learn X In Y Minutes on C++ to get a feel of it before diving in.
What are some high quality, small C++ projects I could play around with and read the source code? I'd like to explore it in more practical use next.
I studied C++ during university days and after that I never got a chance to learn it seriously due to my job commitments as a web developer. I did work on C# but not C++ professionally. I always admired my friends who could write good C or C++ code.
So many times I've wished to learn it seriously, but these new editors for web development and Python have made me lazy enough that I wish for some environment and easy way to write and compile C programs.
I heard modern C++ is way better than what it was in the early 2000s. I want to give it a try; hopefully I will be able to learn and use pointers properly.
I have spent my whole career writing C or C++. While it is true that I write code at a different layer of the stack than is common for languages like C#, I wouldn't say it is particularly admirable. There are intermittent cool things that I get to write in C (data structures, protocol stacks, etc.), but for the most part it is just plumbing. There are lots of cool things that people get to implement higher up the stack too.
I had the same problem, and I decided to start building a few Qt apps for command-line tools I had made. This worked quite well for me because it wasn't too complex or large, you tend to write in a "modern" style (although the downside here, I guess, is that Qt is also somewhat idiosyncratic), and you get build tools, so you don't need to worry much about that.
I forgot to mention that I did work with Qt 3.x in the last decade. I liked it because I was working on VB at that time and was dying to find something similar for Linux (was exploring RH those days). It did not help me improve my C++ though.
Machine code is just a tool to instruct a machine how to do something. Programming languages are for human-to-compiler and human-to-human communications.
While you are right, I see a lot of engineers making bad decisions about the right tools to use for a given problem, based on psychology and emotional reasoning.
Things might even work in the end, but you can end up with a weak project that won't scale, or that will crumble when faced with the destructive nature of time.
I know it's a controversial topic, but I think a little bit of Zen Buddhist training of the mind can work wonders in making those sorts of decisions, where the unbalanced parts of our psychology will eventually show up and cripple what could be a great thing.
I am an IT engineer, but I practice a lot of self-vigilance to understand my decision-making process, to avoid making the most comfortable decision and instead go for the right answer, no matter how painful and expensive it may sound.
To give an example of this, I remember CouchDB vs. MongoDB, where the concepts were laid out in CouchDB, but it was done in Erlang; as soon as someone did almost the same thing in a language that suits database development well, like C++, the tool skyrocketed and reached all of its potential.
The language I see this happening with the most is Python, where you have great concepts and great engineers, but a lot of them shouldn't be using Python for that particular goal. Again, I can use Mercurial as an example of a great tool in a badly fitting language (for the given tool; Python might be a great fit for other use cases).
Hi, I guess I'm a little out of the loop... Why would someone generally have regrets? What language "should" be learned in 2018 instead of something like C++ (which it sounds like is becoming obsolete)?
Rust in stable releases hasn't made any serious breaking changes since 1.0 in 2015. There were tiny breakages in edge cases caused by more or less bug fixes, in line with the breakages you might get when upgrading GCC to a newer version.
If you find your dependencies are breaking:
• Keep a lockfile (Cargo.lock) committed in your project, so that your deps won't suddenly get updated to some newer, potentially buggy/incompatible version.
• If you're updating dependencies, also update the compiler. It's possible that new versions of dependencies require the latest version of the compiler.
1. Current setup and where to go learning (the books)
2. I also had nightmares in college with C++, but yes, C++11 fixed a lot of it.
3. Nice links to the other tooling.
What's there to regret? Purely for edification purposes, there isn't a better OOP language (though it encompasses other paradigms as well and isn't purely OOP), since it doesn't have training wheels. You could hurt yourself with it, but it certainly allows you to explore and learn.
Also, you could "learn" it in a year, but it takes you a lifetime to master it.
It also takes you a lot closer to learning C. Some of it doesn't directly carry over, and may even be deceptively similar without being the same, but for programmers coming from more modern languages, many of the big novelties in C and C++ are the same:
- Manual memory management and direct use of pointers.
- Emphasis on giving as much latitude as possible to compiler writers so they can generate fast code on obscure architectures.
- Separation of header files from object files.
- Explicit handling of object files and linking in the build process.
In fact, I think after learning C++ you should be able to read almost any C code with no issue, which is nice for being able to peek under the hood of the infrastructure we rely on every day.
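The "separation of header files from object files" point above is probably the biggest structural novelty for people coming from modern languages. A minimal sketch (file names and the `greet` function are invented for illustration; shown here as one listing, with comments marking the conceptual file boundaries):

```cpp
#include <string>

// greeter.h (conceptually) -- the declaration that other translation
// units compile against:
std::string greet(const std::string& name);

// greeter.cpp (conceptually) -- the definition, compiled separately
// into greeter.o and resolved by the linker:
std::string greet(const std::string& name) {
    return "Hello, " + name;
}

// Typical build: each .cpp becomes an object file, then they are linked:
//   g++ -c greeter.cpp -o greeter.o
//   g++ -c main.cpp    -o main.o
//   g++ greeter.o main.o -o app
```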
C++ is not very canonical OOP. And by that I don't just mean its multi-paradigm nature, but its realization of OOP features. Using C++ teaches you mostly about using C++ and doesn't really transfer across the paradigm.
I think that statement would have to be classified as inaccurate. Using C++ teaches you mostly about general-purpose programming, and it's pretty easy to go from C++ to Python to Java to bash, etc.
Nonsense. C++'s template system behaves unlike any other language's (things like SFINAE and the techniques that use it do not transfer), C++-style RAII is relatively unusual, I don't think any other language requires the programmer to think about virtual versus non-virtual inheritance, the C-derived sequence-point rules are obscure and much looser than most languages' evaluation rules, the exception-safety rules are unique to C++, most languages don't make you track the minutiae of a dozen different integer types with inconsistent promotion rules, textual macros are abominable, and C++ has more obscure control-flow constructs than most languages... meanwhile the language has no standard support for true sum types, garbage collection, or unit testing, and it has poor library tooling and a limited type system, which means you don't learn the general-purpose techniques that rely on those things.
If all you know is C++ you will be able to produce a limited amount of working code in a handful of C++-like languages, sure - but even in Python or Java or Bash you'll produce very unidiomatic code if you write in C++ style. If you tried to work in e.g. Smalltalk or Erlang or even OCaml you'd really struggle. It is not a good way to learn general-purpose programming, because the language is hugely complicated and most of the complications are C++-specific.
All the complications you list derive from the peculiar C++ language design principle of ensuring high performance despite sophisticated abstractions; popular languages are simpler because they stop at basic abstractions (e.g. C and Forth) or because they trade performance for elegance (e.g. C# and to a higher degree Python and Smalltalk).
C++ does more at a higher cost, and in recent standard updates the cost has been steadily decreasing.
> popular languages are simpler because they stop at basic abstractions (e.g. C and Forth) or because they trade performance for elegance (e.g. C# and to a higher degree Python and Smalltalk).
I'm not convinced. C++ offers some abstractions but is also missing some quite basic ones (no sum types, no true parametric polymorphism, polymorphic code can only be typechecked once fully expanded). ML-family languages offer substantially better abstractions at minimal performance cost or, in the case of Rust, no cost. Even for GCed languages my experience is that at practical levels of developer effort C++'s high performance on benchmarks is outweighed by the language complexity burden (e.g. Haskell code that solved the same problem as C++ code was not only shorter and more maintainable but also substantially faster; no doubt if we'd carefully hand-optimized every line of the C++ it could have achieved higher performance, but even unoptimized C++ took much longer to write than the Haskell solution).
> C++ does more at a higher cost, and in recent standard updates the cost has been steadily decreasing.
The big cost is the complexity of the language, and since the language rarely if ever removes anything standard updates usually add to that rather than reducing it.
ML-family functional languages, which you seem a fan of, also trade performance for elegance, with few fortunate exceptions in case elegance goes far enough to allow extreme optimizations that recover whole-program performance despite widespread inefficiency "in the small".
Significant bad parts of C++ have been deprecated or effectively replaced with something better; you just have to adopt the good way.
For example, the new initializer syntax allows for simpler and saner rules about type conversions and lookup of constructors.
> ML-family functional languages, which you seem a fan of, also trade performance for elegance, with few fortunate exceptions in case elegance goes far enough to allow extreme optimizations that recover whole-program performance despite widespread inefficiency "in the small".
I'm not convinced. Small-scale heavily-optimized microbenchmarks show C++ as at most a single-digit multiple faster than e.g. Haskell. I wouldn't expect you to achieve even that much over Rust. And like I said, I've yet to see such a performance advantage to C++ in a real-world scenario - quite the opposite. It's not always a tradeoff - sometimes one thing really is better than another thing.
> Significant bad parts of C++ have been deprecated or effectively replaced with something better; you just have to adopt the good way.
And ensure that all your libraries/tools/coworkers have adopted the good way, which you have no way of enforcing; at best you have ad-hoc linters that flag up some (but not all) bad practices.
I'd have to disagree about the transitions from C++ to Python, Java, or Bash.
The complication is C++-specific, and moving to Java, Bash, or Python simply removes that complexity. Hence moving from C++ to one of those languages is easier than moving from Python or another similar language to C++, because in most cases you would be adding complexity.

OCaml and Erlang are too different for me to comment about a transition in the functional direction.
Well, that's just, like, your opinion, man (a Big Lebowski reference for those too young to know). The reason I disagree with you is that I have never met a C++ programmer who only knew C++ and couldn't easily do work in some other language, but of course plenty of people know other languages and can't do C++. Yes, SFINAE and RAII exist - but you can go for many years of working in C++ without writing much code using either of them.
That is more like an example of the Blub paradox in action: you can write everything as if it were C, but you can't write C as if it were everything else.
Except that it makes a point subtly different from the one pg originally raised when discussing the Blub paradox: if you only care about shipping something, you aren't inclined to think about abstraction as a goal state, and that's what these other languages are doing - handling things at a level of abstraction that, for a C++ user, is out of reach or requires considerable knowledge of the C++ standard. Users of Python or JS don't know what a pointer is, and that's by design, because for most applications it is an implementation detail and automating it away is desirable.
When most software was written in assembler, data structures more complex than an array were often avoided, because it was difficult to code and debug them. And if you were an assembly coder of that era, you might well say that you didn't see the point of structured programming, or that you could easily get the same effect in fewer bytes. And to some degree, you'd be right, because you would just design a smaller scope of application that makes those techniques viable.
Trying to write something resembling modern C++ style in x86 assembly, in contrast, would be as fruitless as a JS coder trying to use C++ like JS, since your abstractions wouldn't be there. You'd have to learn the thought process of a lower level coder and apply those strategies instead of the ones you are comfortable with.
> I have never met a C++ programmer who only knew C++ and couldn't easily do work in some other language
Did they try the languages I mentioned? C++-trained programmers might be able to write lowest-common-denominator code in other languages but they'll struggle to write effective, idiomatic code in languages that make significant use of pattern matching/sum types or polymorphism, or rely on extensive use of higher-order functions.
> but of course plenty of people know other languages and can't do C++.
I'd agree that C++ programmers can usually write Java/Python, Java programmers can usually write Python but not C++, and Python programmers can usually write Java but not C++. But the implication is not that C++ is more general than Java or Python but just the opposite: programming in C++ requires learning a lot of C++-specific stuff that programmers in those other languages don't bother with.
> Yes, SFINAE and RAII - you can go for many years of working in C++ without writing much code using either one of those.
WTF, RAII is the corner-stone of C++ development. And I miss it (deterministic destruction) in every other language I use.
These days I use C# in parallel with C++, C# has using statement, but it relies on IDisposable, and IDisposable itself is a kludge (just look at guidelines on how to implement it "properly").
It seems a bit odd to complain that C++ doesn't support garbage collection. I mean yes, it's true, but... part of the point of C++ is to give you control of memory.
Sure, but certain programming techniques become impractical without garbage collection, and a general purpose programmer would be expected to be familiar with those techniques. E.g. C++ programmers tend to just not learn graph-based models/techniques because they're not a practical way of working in C++.
I mean, yes, in C++ you have to clean up the memory yourself. That means that you need to know when to do so. That means that, for the graph nodes, you're probably doing some kind of reference counting, and it will be a bit fiddly to get right. If you do it in a base class, though, you'll only have to do it once.
I wouldn't call that "impractical" at all. (I might call that "reason to prefer a garbage-collected language for doing graph-based work", but if I needed to do such work in C++, I wouldn't be particularly daunted by the prospect.)
Fair point. I'd agree that a skilled generalist programmer should have no particular trouble doing graph-based work in C++. My experience is that monoglot-C++ programmers were less likely to learn those approaches because, just as if you need to do graph-based work then C++ is usually a less-good choice of language, if you need to use C++ then a graph-based model is usually a less-good way of solving a given problem (i.e. often the problem admits alternative approaches that are more easily accommodated in C++).
Hmm, not my experience. C++ was my intro to OOP but I was aware of "pure" OO languages like Smalltalk. I feel like my C++ OO experience has taught me a lot although I think it's valuable to study another OO language side-by-side. A lot of OO patterns translate between languages although the implementation in C++ is often overly concerned with static types.
I agree. I wasn't trying to say C++ is great for learning OOP. I was saying that C++ is the best OOP-style language to learn in general because it allows you to break things and it encompasses other paradigms. Ruby is great for learning OOP, but it is far less general and far more protective than C++. I'm sure others will have their own preferences, but to me the lack of training wheels in C++ almost forces you to learn; otherwise, you simply won't progress. I guess that could be a good thing or a bad thing, as it might force some people to quit early on. Ultimately, I don't think learning C++ or any other language should be something you regret.
Unless your job demanded it, I can't think of a compelling reason for a Node guy to learn C++ as their go-to choice of OO language. Why not Java? Even Python is a good gateway to OO for folks coming straight from a Node background.
Then again, why would you want to choose a language based on whether it's "OO" or not? OOP is just a way of structuring code that fits well in some domains, and fails spectacularly in others[0], and that happens to invite a lot of philosophers (the same way FP invites a lot of mathematicians). It's better to think about the capabilities the tool gives you - what kind of software you can write, what kind of software that language's community writes, and how much it helps manage complexity.
--
[0] - Everything I worked on, from games through embedded to web development, was a poor fit for C++/Java-style OO, but I suspect there are some domains where OO modeling is the best way of looking at things.
Wouldn't the main definition of OOP be defined as merging data and functions into a single unit? When using libraries, I find this concept unavoidable for UI and games.
I agree though, that having the entire program be a graph of objects is actually usually the worst pattern.
> I agree though, that having the entire program be a graph of objects is actually usually the worst pattern.
This is what I was primarily thinking about, and this is what is most written about in OOP books. OOP is a big bag that wraps itself around a lot of things, and which appropriated quite a lot of concepts. The concept of gluing together a bunch of data and code in order to treat them as a single entity is indeed useful (at least for imperative code), and while it is the foundation of OOP, I don't think it's really the distinguishing thing about various OOP approaches. For the class-based OOP, it's the composition of data+behaviour, encapsulation, polymorphism and inheritance that together create a particular philosophy - one that I find much less useful than advertised.
The building blocks are no doubt useful - a proper type system is great, but you don't need classes and objects for that. So is polymorphism, and again, you don't need Java-style classes to get method dispatch (see e.g. Common Lisp multimethods for an arguably better way of doing this, and one that doesn't even treat methods as parts of classes!).
I'll concede that OOP approach fits UI libraries unusually well (though you can hit some conceptual roadblocks there too; I'm not sure I've ever seen a good OOP design of tables, nor do I know how to design it well). But then again, I was recently writing some React code in ClojureScript, and it turns out that functions and plain data can handle building stateful UI components well too.
What I mean by saying that OOP appropriated things - I've seen people thinking, and even myself I used to think, that "abstraction" is something that a class creates, and is what you achieve with OOP. I gradually grew out of that belief, and I vividly remembering that reading SICP made it finally click in my head that quite a lot - if not most - of the software engineering practices discussed and attributed to OOP are in fact more general concepts applicable regardless of your programming paradigm.
> Not really. In fact in Lisp OO is the opposite: the functions are explicitly kept away from the data.
Common Lisp really opened my eyes here. Initially it felt weird to have methods[0] living completely independently from classes, but over time I realized that where classes and objects implement nouns, generic functions and methods represent verbs, and in a language the verbs are an independent domain from nouns, representing their own generalized concepts that's unrelated to the taxonomy of nouns.
--
[0] - A "method" in CLOS is an individual implementation of a "generic function". So e.g. you could have a generic function `(defgeneric draw (device figure))`, and then specific implementations dispatching on any combination of arguments; e.g. `(defmethod draw ((device printer) figure) ...)` to draw any kind of figure on a specific device, or `(defmethod draw ((device plotter) (figure circle)) ...)` to draw a specific thing on a specific device, etc.
Yes. Note that `this` is just syntactic sugar; a call to `obj.method(arg)` is, underneath, essentially a call to `method(obj, arg)`, with the first argument always being hidden, and the only one that's a subject of a method dispatch (polymorphism).
Common Lisp gets rid of implicit `this` by making all arguments explicit, and by not restricting polymorphism to the first parameter - in fact, you can do a method dispatch on any of the method parameters, or any combination of them. See https://news.ycombinator.com/item?id=18848384 for an example.
I think your "for a Node guy" is doing more work than "Why not Java?". I mean, it's true that if you don't work in the world of low-level performance, C++ doesn't bring much to the table.
But to be clear: Node and Java (and anything with a managed heap) don't have a paradigm for things like RAII, inlined parametrization, copy-by-value, or move semantics, because they don't have a way to express them. But those are useful tools, and important in some regimes.
I can think of one: C++ is Node's extension language (being the language V8 is implemented in). If you want Node to do things not already built into it or talk to a library that doesn't have a package available for it already, you have to use C++.
C++ is not an OO language. C++ is a tool to get things done in the real world using whatever paradigm makes sense for your problem. Sometimes that is OO, sometimes not.
C++ is very messy, but for the most part you can ignore the messy things.
There are three sets of primitives I can recommend for you to get some experience with: the Turing machine; the Lambda Calculus; the Lisp primitives (CONS, CAR, CDR).
If you want to do systems programming then you need to learn C, C++, Rust, maybe Go. I mainly write Java/Kotlin, and learning C++ and Rust has really helped me learn things about computers that Java hides.
The Node/JS ecosystem is such a steaming pile of garbage, I can honestly see with great clarity why people would run from it back into the arms of C++11/17...
You shouldn't make broad statements like that. In my personal experience, I have found JavaScript much easier than C++. If I want to install a package, I can just run "npm install <package_name>". Also, zero-configuration bundlers like Parcel make building complex projects very easy.
Now, I'm sure some people have an easier time with Makefiles, but I've found it even easier to get up and running in Javascript. Most of the tools in Javascript (create-react-app, Webpack, etc.) are very user friendly, making setup trivial.
It is best not to speak with such authority/certainty that C++ is easier to use than Javascript.
I'm reading this because I'm about to move back to systems programming after a 6-year journey into JavaScript/Node.js full-stack development.
Don't underestimate the pile of garbage JS indeed is! Yes, you can install a package with ease - you'll just have to do that with about 100 packages that keep changing all the time with all kinds of breaking changes (not to mention their thousands of dependencies). I'm also sick and tired of the hipster JS community where every piece of shit can become a hype. With JS you'll be forced to work with things you hate. Almost all codebases I have to work with are horrendous piles of rubbish that often need to be completely rewritten from scratch. Almost everything you write doesn't last. You've spent a year learning Backbone? Just throw it away, now it's Angular - start all over again. One year later? Stop doing Angular, it's React now, just start all over again! Hey, now we have some hipsters promoting Vue, they say it's the holy grail, just start all over again, it's fun! Flux stores? Fluxxor, Alt, Redux, Redux with Sagas, or just go with Thunk? It doesn't matter that much, it only lasts for 1 or 2 years! I'm completely sick and tired of it, including the fact that I'm only working on stupid e-commerce websites.
Talking about makefiles: they're an order of magnitude easier than trying to set up Babel and Webpack for a medium-sized SSR SPA. I recently had to upgrade from Babel 6 to 7 - what a fuckin' pain that was, so many changes, the deployment server refused to boot, etc.
I'm not too worried about the constant "framework hype". Although there are many frameworks, React is a solid option that remains relatively dominant. Yes, others exist like Vue and Angular--but I can be pretty confident that React isn't leaving anytime soon.
Similarly, with C++ there are also many packages to do the same thing. This summer I was using TLS and I found a variety of options, OpenSSL, mbedTLS, wolfSSL, etc. etc.
I do agree that it is tiring to learn some of React's accessories. I have never used Redux and when I do need some form of global state management, I think I will use MobX.
The complaints about code base quality and "stupid e-commerce websites" don't seem to be problems inherent to Javascript, although I suspect the problems Javascript solves are less interesting than the ones C++ solves.
All in all, I feel like C++ (for me) has been more difficult than Javascript due to a lack of standardization and the community being less beginner friendly.
That being said, I was trying to use a relative obscure feature (SGX) in C++, whereas with Javascript I stick to relatively mainstream applications.
JavaScript is a simple language that can be made extremely complicated via "simple" tooling. You can open the node_modules folder and see how sausages are made. :-)
C++ is dealing with essential complexity; there is no silver bullet.
>It is best not to speak with such authority/certainty that C++ is easier to use than Javascript.
Unless of course you have 30+ years of experience in the software business, have kept abreast of all the latest and greatest distractions from proper software engineering practices, have built a few hundred examples of such personally, and have no desire to throw more garbage at the fiery pile. As is my personal case. Node/Javascript are the Visual Basic of the 21st Century - this doesn't mean people haven't been productive with them as technologies, just that they've been productive in spite of them.