Carbon Language: An experimental successor to C++ (github.com/carbon-language)
580 points by emidoots on July 19, 2022 | 504 comments



If you are like me and wondering "What makes Carbon different from Rust or Zig?"

    1. The ability to interoperate with a wide variety of code, such as classes/structs and templates, not just free functions.

    2. A willingness to expose the idioms of C++ into Carbon code, and the other way around, when necessary to maximize performance of the interoperability layer.

    3. The use of wrappers and generic programming, including templates, to minimize or eliminate runtime overhead.
In other words, what Carbon can do that Rust can't is take a C++ class with a `foo` method and call that method, or create a class with a `foo` method and call it from C++. That is probably one of the biggest hurdles in C++ interop. Most languages don't do that; instead you'd write a C function binding and struct and move data/invoke functions through that.
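
To make that concrete, the C-binding workaround typically looks something like this (a sketch; Widget and the widget_* functions are made-up names):

    // A C++ library class another language wants to use.
    class Widget {
    public:
        int foo(int x) const { return x * 2; }
    };

    // The usual workaround: flatten the class into a C ABI that any FFI can call.
    extern "C" {
        Widget* widget_create()                    { return new Widget(); }
        int     widget_foo(const Widget* w, int x) { return w->foo(x); }
        void    widget_destroy(Widget* w)          { delete w; }
    }

Carbon's pitch is that this shim layer (and the loss of overloads, templates, and inheritance that comes with it) goes away.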


Which is what D can do. D can (and does) interact successfully with C++ classes and member functions. Even templates!


D has also taken a big step forward in being able to directly import C code. You can do things like:

    import stdio;
and it will compile stdio.h with the builtin C compiler, making all the declarations in it available to your D code.


Honestly, the only thing lacking with D (for me) is the lack of good support from Qt (which I think is something Qt should tackle by making the moc multilingual). But I agree with Walter. D already did what Carbon is trying to do, but the syntax is easy enough to pick up in a weekend.


> Honestly, the only thing lacking with D (for me) is the lack of good support from Qt (which I think is something Qt should tackle by making the moc multilingual)

It's possible to do Qt without moc even in C++ with https://github.com/woboq/verdigris/, so why wouldn't it be possible from D? It should be even easier considering that D traits allow reflection of member and function names, etc.


> It's possible to do Qt without moc even in C++ with https://github.com/woboq/verdigris/, so why wouldn't it be possible from D?

You're talking about an entirely different thing. While OP was referring to the current state of D's ecosystem and the impact that missing key frameworks have on hindering adoption, you're arguing about the theoretical possibility of writing a framework with a language, which really does not address OP's point.


No, you are misunderstanding their point. If the problem of using Qt from D is that you need the MOC, then the fact that you can work around the need for MOC and use Qt without it seems quite relevant?


The initial complaint was that it did not have "good support." I think it is fair to say that having to spend a significant amount of effort to work around a lack of support is not "good support."


Indeed, that is what I meant. I would like to see Qt allow some sort of language-abstracted moc so I can just install Qt and a set of Qt bindings and then use them. Just because I can work around the moc doesn't mean that I can easily and productively use Qt from D.


But... how would that work? What does "language-abstracted" mean for something that is specifically about a language?

E.g. moc in C++ looks for your classes with a Q_OBJECT macro to generate the matching reflection & metaobject data in a .cpp: how does that work in a language that doesn't have preprocessor macros, or maybe even classes, e.g. Scheme or some BASIC dialect? In addition, moc is only necessary for languages that do not have proper reflection & code generation facilities - if they do, it's entirely unnecessary, as the metaobject code can just be generated in-language as part of the bindings you're mentioning. E.g. consider the Python Qt bindings: they don't need a moc. Same for D.
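
To make that concrete, here is a minimal sketch of the kind of class moc scans for (illustrative names; it needs Qt and a moc pass to build), just to show why a preprocessor-based workflow doesn't translate directly to other languages:

    #include <QObject>

    class Counter : public QObject {
        Q_OBJECT                       // moc looks for this macro and generates the
                                       // reflection/metaobject code in a separate .cpp
    public:
        explicit Counter(QObject* parent = nullptr) : QObject(parent) {}
    signals:
        void valueChanged(int value);  // body is supplied by the moc-generated code
    public slots:
        void setValue(int value) { emit valueChanged(value); }
    };

A language with built-in reflection can generate the same metaobject data itself, which is the point about Python and D.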


It is possible, but nobody did it.


dlang would have been a very serious contender to C++ had it been fully nogc, stable, and lean. dlang also unnecessarily suffered low adoption at the start due to competing stdlibs and trying to be both Java and C++ at once.


I am still a big fan of both dlang & Pascal & very hopeful of dlang's Bright future ahead.


+1 to Pascal here! Underrated language.


If you use the @nogc attribute, D is fully nogc.


Yes, it is a very useful addition. Hopefully the stdlib will also be fully nogc soon. Of all the available options I am most hopeful about dlang. While it still hasn't taken off, it is improving a lot and has a high chance of increased usage. (Python also took years.)


It doesn't need to be fully nogc. Not using the parts that use the gc will cripple nothing. You'll know which ones use the gc because they won't compile with @nogc.


@nogc is fine, if you're okay with not being able to use large swaths of the standard library, and by extension, most popular D libraries.


Not much of the library is dependent on the GC. For example, only one of the algorithms is.


There's no "nogc" containers in phobos, or allocators, or an idiomatic way to do safe manual memory management. It expects you to do it the C way. It's also impossible to implement some things because of how D does moving. There's a DIP in the works to change how moving works, but it's overly complicated and bound to introduce even more bugs. https://github.com/dlang/DIPs/blob/master/DIPs/DIP1040.md


Yes I use @nogc on main itself, to catch all gc usage.


Most C++ codebases would be exactly the same with or without a GC. Probably 90% of collective programmer intuition about memory allocation is either completely wrong or comes from thinking about scaling to a point that most products don't get anywhere near.


Probably not, or they'd just be written in Java.


There is more to performance than allocating memory. In fact, some very performance-centric codebases, like Unreal Engine, do use a GC.

On the subject of games it always makes me chuckle when I see people complaining about garbage collection but then having 20 calls to malloc in their hot loop. All memory allocation is slow and not necessarily bounded.

I have written code that uses SoA, cache aware metaprogramming, inline asm, SIMD etc, with a GC because I knew I didn't need to allocate often.

On the subject of Java, I'm no fan of it, but lots of projects would probably be fine - probably not quite as fast, but not horrifically so. On Hacker News we only discuss extremely careful or expertly written code, whereas in real life a lot of people use C++ because it's what was hammered into them at university 20 years ago as the fast language.

For example a lot of engineering and finance codebases are written in a very bad style of C++ code that would be improved by not having the programmers worry about memory too much - infrastructural code is more subtle, but most code serves a direct purpose like implementing some model. In these cases memory allocation is merely a means to an end rather than part of some grand strategy (i.e. it's not like writing a library)


I love it when people bring up that UE has a GC. If it were written in Java, do you think they would have built a GC on top of Java's GC?

I guess it depends on the GC at that point. 20 mallocs isn't a lot, but like the GC in D, it pauses all threads to do its collection when you allocate. There are also a few games that use C# that have a really bad stutter because of the GC. There's nothing they can do about that, though.


Unreal Engine mostly uses GC for game world objects. Only C++ classes that inherit from a special base class and opt-in are subject to GC. If they used GC for everything the performance would be much worse.


Sure, but I'm saying that most C++ code in the wild is probably firmly in the former category.


D doesn't do it very well. Carbon doesn't require you to write your own interface. There are also problems with extern(C++) not correctly generating the appropriate assembly. Lots of ABI bugs, as it's rolling its own implementation instead of using LLVM.


The syntax looks a lot like Rust, though. I'm surprised they made such a break when their explicit goal is to make migration from C++ as easy as possible. Also, Rust (from my biased point of view) is currently on its way to become the standard low-level language, so I'm not too confident that Rust-but-with-OO is enough of a selling point.


Considering the seemingly endless list of things that deliberately don't break with the C++ legacy, new syntax is almost the only change left. And if you were about to give C++ a syntax reboot, why wouldn't you look at what other successful modern syntaxes are doing? "C++, but in a syntax for people accustomed to Rust instead of a syntax for people accustomed to C" sounds like a perfectly reasonable approach.

Your perception (and mine) that Rust is about to become the new default for "true native" is perfectly consistent with this: a language for the Rust generation for when they have to deal with the C++ legacy, a legacy that won't be going away any time soon. I suspect that the author (authors?) wouldn't disagree at all with "use Rust when possible, Carbon when you can't"; my perception (from a quick glance at the site) is that they are fully aware of the limitations of the niche they have so clearly staked out.


> I suspect that the author (authors?) wouldn't disagree at all with "use Rust when possible, Carbon when you can't"

That is in fact explicitly stated in the Carbon introduction:

"Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should."


Also on their FAQ https://github.com/carbon-language/carbon-lang/blob/trunk/do...

> If you can use Rust, ignore Carbon

> If you want to use Rust, and it is technically and economically viable for your project, you should use Rust. In fact, if you can use Rust or any other established programming language, you should. Carbon is for organizations and projects that heavily depend on C++; for example, projects that have a lot of C++ code or use many third-party C++ libraries.


The problem with C++ is it keeps getting better. There was a time when Rust was interesting to me, but then C++11 came out. Then they kept improving it.


Regarding C++: IMHO it's not getting intrinsically better, just more complex. Easy things become a bit less verbose, but the hardest things remain as hard and the compiler is as unhelpful as before.

I agree C++11 and its successors are sugaring the language to be a lot nicer, but fundamentally nothing has changed.

Rust is fundamentally better in its compiler warnings (they are actually helpful), and contains specific solutions to the things that are hard and bite you in C++.

C++ is not going away and it’s my main professional language but Rust does have features that are better.


> but fundamentally nothing has changed.

IMO, move semantics and lambda functions have fundamentally changed the way we write C++.

Also, let's not forget that C++11 introduced a formal memory model with cross-platform atomic operations and multi-threading primitives.
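
For example, this is portable standard C++ since C++11, where before you needed platform-specific threading libraries or compiler intrinsics (a minimal sketch):

    #include <atomic>
    #include <thread>

    std::atomic<int> counter{0};

    void work() {
        // Well-defined under the C++11 memory model; no data race, no torn writes.
        counter.fetch_add(1, std::memory_order_relaxed);
    }

    int main() {
        std::thread a(work), b(work);
        a.join();
        b.join();
        return counter.load();  // always 2
    }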


Fundamentally, it has smart pointers and std::make_unique now, which means I never have to touch a raw pointer in most of my code. They can even be adapted to wrap Win32 objects. That's a huge improvement and I don't mind using C++ for new projects now.
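
For anyone curious, wrapping a Win32 HANDLE tends to look something like this (a sketch assuming a Windows build; HandleCloser, UniqueHandle, and open_log are illustrative names):

    #include <memory>
    #include <windows.h>

    // unique_ptr with a custom deleter: the HANDLE is closed automatically on scope exit.
    struct HandleCloser {
        void operator()(HANDLE h) const {
            if (h && h != INVALID_HANDLE_VALUE) CloseHandle(h);
        }
    };
    using UniqueHandle = std::unique_ptr<void, HandleCloser>;

    UniqueHandle open_log() {
        return UniqueHandle(CreateFileA("log.txt", GENERIC_READ, 0, nullptr,
                                        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr));
    }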


In my experience compiler error messages have gotten hugely better, except for template issues which are still very verbose. Especially with clang.


And template error messages should become much better with concepts in C++20.
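
A rough sketch of why (the Number/twice names are made up): a constrained template fails at the constraint with a short diagnostic instead of pages of instantiation backtrace.

    #include <concepts>

    template <typename T>
    concept Number = std::integral<T> || std::floating_point<T>;

    template <Number T>
    T twice(T x) { return x + x; }

    int main() {
        twice(21);      // fine
        // twice("hi"); // error: the 'Number' constraint is not satisfied -- short and to the point
        return 0;
    }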


C++ has virtually zero tooling and the committee is not interested in ever working on that. Comparing CMake to cargo is like comparing fifth century fireworks to the Space Shuttle. I mean we are getting modules that aren't literally copy paste maybe next year.


"C++ has virtually zero tooling"

I read that and I was like WTF, the entire programming ecosystem exists on C/C++ tooling. But from your perspective it's modules/cargo that is the tooling?

That is IMHO an odd viewpoint. As someone who despises the way cargo works, and hates not having long-term explicit control over my dependencies (going so far as to track and check them in along with build artifacts), I'm not convinced that the recent "toss in another random dependency that itself pulls in dependencies" style of build is a good thing.

I like the fact that I have three dozen+ different ways to do regexps in C depending on my priorities, and that picking one requires cognitive overhead and modifications to source control/etc. It's easy to add a line to a makefile/etc to pull crap off GitHub in C, so it's not like this is a hard problem to deal with in C/C++, but it's one where the scale of the problem allows for optimization. AKA, like the dynamic typing argument, making the programmer think about a problem I believe yields a better solution.

It's also one where I'm not tied to the whimsy of the library author should I decide to fork or maintain the code long after they have gotten bored or rewritten it 3 different times. I can to this day rebuild code I wrote 20 years ago on a modern machine with little effort. Can you say the same about even 10-year-old Node.js or Python code?

Put another way, I spend a little bit more on upfront effort and it pays off long term. And I know I'm in the minority, but it's also why I've repeatedly run small teams of a half dozen or so people whose products are ahead of major competitors with teams of hundreds of engineers.


You can do things the way you do them in C++ in Rust if you want. You could `cargo vendor`, you could fork a repo and depend on a specific commit, etc. It's up to you to have the self-control to do that and not just shovel random dependencies into your project, that's all.

Mostly you seem to be complaining that it's too easy to add dependencies? (Maybe put a 'sleep' in your bash prompt or something?) And maybe that the Rust ecosystem doesn't have as many duplicate libraries as C++? (Of course it doesn't?)


The problem is that cargo gets in the way (as does rustc) if you try to use it in anything other than the simple "import this dependency" mode. I spent a hellish day or two trying to pick up a dependency outside cargo because it didn't behave in a way that meshed with the rest of my build environment.

So, yes, it handles a couple of the basic cases, but then you're out of luck, stuck in version hell/etc, because rustc and cargo are so tightly integrated and the flags needed to emulate some of the behavior with just the compiler are poorly documented and/or version dependent.

It dictates, you conform (which is basically the rust way).


The problem with the typical rants about C++ is that you guys are outdated.

With Conan you can consume over 1,000 packages directly from build systems and with way more control than what Cargo seems to offer. Take a look. I did use it for a while and compared to 15 years ago things are way better now.


The problem isn't that Conan exists, but that other competing solutions are just as popular, which results in fragmentation (for instance vcpkg, and CMake can also directly pull in external dependencies now). In the Rust world there's just Cargo.


Not really; Android and Fuchsia have their own cargo replacements that fit better with their OS SDKs.

Either that, or we get a box of surprises in each build.rs script.


Competing solutions are great for innovation.


In the world of build systems, I think most people want a little more consistency and a little less innovation. Sometimes a tool should just do its job and broadly resemble other tools in the space.


> In the Rust world there's just Cargo.

While not nearly as popular as Cargo (which is blessed by the core team), Rust still has multiple build tools and package managers.


Fragmentation is not the problem. The future of C++ is to have metadata exchange, at least the way I see it.

This means that packages can be compiled with any build system, the one that fits you better. In the event that you cannot find a binary that fits you, your package manager can compile on the fly. There are also solutions to keep binary artifacts in Conan. This means that most of the time you can consume pre-made packages in vcpkg/Conan for casual consumption, but you can still have binary repositories with fully customized builds for all your permutations of compilers, systems, and debug/release. Do not forget this is a hard problem; we are talking about compilation to native, where sometimes you need full speed and a custom compilation, not about portable Java bytecode.

But you can still set up and tweak recipes in Conan and upload to your binary repository (what we do at my company).

However, none of these things tie you to a build system, since conan can generate .pc, .cmake, MSBuild, XCode and way more metadata to consume those packages.

You can scale from simple to fully customized. For example, at one point we had a libcxx that we compiled ourselves, pointed all deps at it, patched packages, and could build something with twenty-some dependencies against that customized libcxx optimized for us. Can you do that with Cargo? Serious question, I did not make extensive use of it.

However for simple uses you can just drop a conanfile.txt and conan install your profile and start to consume your packages. All of this works with CMake, but if I want to use Meson build system or SCons, Make and others I can do it as well. Look at the generators page: https://docs.conan.io/en/latest/reference/generators.html

I do not think fragmentation in build systems is even a real problem anymore because the "not fragmented but simple" philosophy leaves many things out of the box, such as compiling something custom when you really need it.

The system is very flexible. True that the learning curve is harder at first, but much simpler than it used to be in the past.

I can have full projects, not even with Conan, just with simple Meson wraps (Meson wraps are source-only dependencies though) pointing interdependencies to the same version of a library with a very reasonable amount of work: sometimes a single switch in the options, sometimes a small patch that you can keep in your subprojects/ directory or somewhere else.

I can easily generate .cmake and .pc files from Meson itself without trouble, which are two of the most used build systems. But if on top of that you build a Conan recipe then you can have many consumers of your packages for free.

It is nice to have a tool that is more or less streamlined sometimes, but build systems are a difficult topic and probably there are some that do a few things better than others: cross-compilation, Linux tweaks, generating VS solutions for you if your company uses VS, XCode, or whatever. It is not bad to have a choice as long as you can interchange.

I would say it is even better than having the one-size-fits-all thing and later when you want to go to the real native stuff then you discover that your tool is too basic and cannot switch.


> C++ has virtually zero tooling and the committee is not interested in ever working on that

This sentence makes no sense at all:

1 - Tooling does not stop to the build systems

2 - The tooling set in C++ (and C) has been built over decades. It is indeed not a single shiny cargo-style CLI, but it is several orders of magnitude bigger and more powerful than anything you will ever find in any other programming language: memtracers, profilers, thread sanitizers, debuggers, disassemblers, static analyzers, package managers, bindings generators, ABI checkers, crash analyzers, loggers, and I am sure I am still forgetting many others.


Yeah what I meant was build system tooling, sorry for the approximation.


Most of the tools you mention work below the language level, and thus work automatically for non-C/C++ languages too (or at least LLVM-based languages).


They do work below the language level but have generally been explicitly made for C and C++ programs, e.g. abi-compliance-checker.

The other languages currently benefit from them because they leverage LLVM (also made in C++) or because their interpreters/VMs are made directly in C.

How does the saying go? Standing on the shoulders of giants? :)


> C++ has virtually zero tooling

CMake, Meson, Waf, Conan, Visual Studio Code, Visual Studio, CLion, Intel VTune, GDB, LLDB, XCode, Artifactory, SonarQube, clang-tidy, clang-format, astyle, Incredibuild...

> Comparing CMake to cargo is like comparing fifth century fireworks to the Space Shuttle

You are wrong here. Cargo serves a fixed set of "this-is-how-to-do-it" workflows. In C++ you can build anything. I do not mean it is better, but C++ software already exists and that is the approach that works better for it. :)

> and the committee is not interested in ever working on that

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p08...

Interoperability effort for modules: https://github.com/GabrielDosReis/ipr


Besides having LSP (Language Server Protocol) support, Visual C++, CLion, and others, I have been successfully using CMake/Meson paired with Conan, and I can tell from experience that it is amazing what you can achieve.

Someone will say Cargo or whatever, which is nice and simple. But with these tools you can choose all the details and optimizations as well, something that other tools just hide. You can still keep it simple with a conanfile.txt for your dependencies and the stock add_executable etc. in CMake/Meson and be done with it. If you feel CMake is subpar (because its syntax and DSL are terrible, I agree with that) then you can use Meson. It works pretty well.


Use Visual C++ and you will have modules today.

I have cargo today in C++ via NuGet and vcpkg, and what is great about it is that I don't have to compile my dependencies from scratch.


> I have cargo today in C++ via NuGet and vcpkg, and what is great about it is that I don't have to compile my dependencies from scratch.

To be fair, if you're using a language that has a reasonable compilation story, this is only ever a concern the first time you compile.


Sure I always need an excuse to go out for lunch.


Hahaha you have zero clue. C++ has the best tooling in the world. From static analysers to best in class debuggers to best in class performance profiling tools etc. No other language comes even close.


Java has far more extensive tooling in all the categories that you've mentioned. I don't know if it's "the best tooling in the world", but if I had to make an educated guess, I'd bet on Java - and I don't even like it as a language.


Most of the tools you mention are actually language agnostic (debuggers and profilers). And static analyzers are much more important for C and C++ than other languages.


Given that gunpowder was invented in the 9th century, fifth century fireworks were probably pretty uninteresting...


It's impossible to deny that C++ is improving.

But it's worth investigating what you can do when you aren't tethered to wild inconsistencies and odd behaviors and workarounds because the community is terrified of breaking backcompat.


C++11 was where C++ lost me, and I went from saying I knew C/C++ to saying I know C.


C++14 was what got me in to Rust.

I'm still in the C++ world for work, and have kept reasonably up to date with things through C++17, but I'm sitting here looking at C++20 and wondering if it's really worth the effort...


Precisely.


Doesn't C++11 precede Rust?


Rust 1.0 is 2015. Hence this is sometimes referred to as "2015 edition" in Rust's edition system.

But the sort of people who are here to tell you about how great C++ is will say well, actually Rust existed all the way back to 2006 as Graydon's personal project. There were no numbered releases until Rust 0.1 (after C++ 11) and modern features like Traits don't appear until much later, but sure, in this sense Rust existed in 2006.


I had the same question as you so I checked. C++ 11 standard was released in 2011 of course. Rust development started in 2006 and Mozilla announced it officially in 2010.


> C++ 11 standard was released in 2011 of course. Rust development started in 2006 and Mozilla announced it officially in 2010.

Given that C++0x was a decade in the making, by that measuring stick the answer is yes, C++11 does precede Rust.


It's a bit weird to take the release date for one, and the start of the development for the other. By that measure, the first pre-alpha (!) release of Rust was in 2012; the first stable release in 2015. On the other hand, the first RFC for C++0x was in 2008.


Almost. C++0x did, but C++11 doesn't and some compilers/IDEs (e.g. VS) didn't implement full support for C++11 for years.


> The problem with C++ is it keeps getting better

This is debatable.

It keeps getting more complex. Whether all the additions are worthwhile remains to be seen.


every successive C++ standard allows me to remove a lot of code which is a net benefit


As far as I'm concerned, C++ is there for legacy purposes only.

There are some nice frameworks and tools using it, sure. Yes, you are required to learn it if you are studying CS.

Any serious new development today is done using more modern languages such as Rust (e.g. the linkerd service mesh proxy [1] for encrypted pod communication in a k8s cluster).

As even the Linux kernel is slowly transitioning to using Rust [2], it's only a matter of time before an inflection point is reached and it goes mainstream (if not already.)

[1] https://github.com/linkerd/linkerd2-proxy

[2] https://hackaday.com/2022/05/17/things-are-getting-rusty-in-...


Hahaha wow. There are more than 5 million C++ programmers out there, starting new C++ projects every day. I am pretty sure more new C++ projects are started every day than Rust projects. Everything (including Rust) runs on top of browsers/operating systems/drivers/compilers/VMs written in C/C++. Rust code is like a drop in the ocean in comparison. Maybe in 30 years that will change? Don't count on it.


Just learned that LLVM, GCC, CUDA, HPC, HFT and the whole games industry aren't serious new development.


.. is it new development?


I guess, unless you're into retro-gaming.


To be fair, parts of my work codebase are touching on 25 years old, and I work for a startup from 2020. The codebases we're building on have roots in the mid 90s, and the platforms they run on didn't support modern C++ standards for a very long time after that.


So anything including "windows.h" isn't really new development then? Sounds like a strange definition.

I'd say that there are a lot of new projects being started with C++, still way more than there are new projects in Rust, at least if you only count serious ones.


GCC is from the 80s, Unreal and Source are both from the 90s, and LLVM and CUDA are from the 2000s.

Yeah, I'd call all of those codebases pretty old at this point.


I mean a lot of these things rely on copying old code. AFAIK a lot of new games in bigger studios start by essentially copying the old engine into a new tree.


Nope. Ex. game developer here. New C++ projects and libraries are started every single day.


> I guess, unless you're into retro-gaming.

So new (same-)old development then?


> Any serious new development today is done using more modern languages such as Rust

Uhm, no. Sorry, thanks for playing, try again.


Robotics is primarily C++ and Python. That may change to Rust and Python, but it will definitely take some time.


My impression as a CS grad is that very few computer science courses require you to learn C++.


What languages you learn at university highly depends on the specific school you are at, the professors, and which companies are donating the most money. It's certainly not a great representation of what is going on in the industry as a whole.


I'm studying Mathematical Informatics and we have C++ as the first language.


My impression is that C++ is used in the real world though. :)


Yes, that is true, but I'm responding to the upstream comment saying "you are required to learn it if you are studying CS". There is a common perception that computer science degrees are about learning things like C++, but my experience is that most emphasise more abstract skills like data structures, algorithms, complexity, formal program design, etc.


Not at all. Computer science and engineering (I think these are different degrees in the US, but in Spain it was a BS + Master when I studied) is about algorithms, data structures, big-O complexity of algorithms, client/server architecture, hardware and assembly, understanding all the underlying math, networking, even HPC and advanced data structures when you keep choosing specialized subjects, but not learning a single tool for the sake of it for programming.

In fact, it is the least relevant part when you are studying. You learn tools better after you finish the degree, IMHO.


It really depends on what bit of computer science you are learning.

C++ is definitely past its prime and has been surpassed in many areas by other languages but there are still a bunch of domains where it is the primary language.


How about almost all domains? Everything in the world runs on top of browsers/operating systems/drivers/VMs/networks/embedded systems etc. that are written in C/C++. Not to mention that the tools used to design chips and manufacture them are also written in C/C++. No other language comes even close.


Having C or C++ somewhere in the lower levels of your stack doesn't make them the primary language for your domain. Otherwise CPU microcode would be the primary language of every domain.


In fact there is plenty that can be expressed only in C++. A person making effective use of the power of the language finds anything else a huge step down.

Anyone not having fun when coding C++ is doing it wrong.


Example of some well-written code that expresses things that can only be expressed in C++?


I'm far from the most experienced in C++, but friend classes/functions do not seem to have a direct equivalent in any other language.
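
For reference, here's roughly what the feature looks like (a minimal sketch; the class names are made up):

    // 'friend' grants a specific class or function access to private members,
    // without making those members public to everyone else.
    class Engine {
        int rpm_ = 0;
        friend class Dashboard;   // only Dashboard may look inside
    };

    class Dashboard {
    public:
        int read_rpm(const Engine& e) const { return e.rpm_; }
    };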


In C#, nested classes are effectively friends with their outer class, and it's used quite often in practice.

On the other end of expressivity, in Eiffel, every class member is declared as visible to specific other classes; if you want something to be public, you say that it's visible to ANY, which is the universal base class similar to e.g. Object in Java.


NB: friend-this and friend-that are not among the powerful features of C++.


Because you don’t need them in other languages.


> Rust (from my biased point of view) is currently on its way to become the standard low-level language

The evidence seems to suggest the opposite. Other than a lot of talk in programming fashion publications that are always more aspirational than representative (such as this site), Rust seems to have reached 0.3% of the market [1], up from 0.1% a couple of years ago [2], and while that is OK growth, programming languages with few exceptions tend to reach, approach, or at least point toward their all-time peak market penetration around age 10, and Rust is already 7. Any language could, of course, be an exception to historical trends, but there's nothing to suggest that is the case.

More anecdotal adoption stories are just as bleak. Even at this relatively advanced age, many companies dabble in Rust — as they did in, say, Haskell — but few established companies have really bet big on it yet.

The only positive is that among the low-level languages discussed on aspirational sites, Rust is, indeed, the most talked-about language, but history also suggests that that is a very bad predictor of long-term market success.

[1]: https://www.devjobsscanner.com/blog/top-8-most-demanded-lang...

[2]: https://www.hiringlab.org/2019/11/19/todays-top-tech-skills/


> programming languages with few exceptions tend to reach, approach, or at least point toward their all-time peak market penetration around age 10

Notable exceptions from the links you've provided:

C#, Java, Go, PHP. All appear to have an upward trajectory today.

JavaScript also saw a similar penetration boost when Node.js came on the scene.

With Rust looking to get integration both into the Linux kernel and GCC, that points to some pretty positive things for the language's penetration. Particularly in the embedded world.


> All appear to have an upward trajectory today.

They might have an upward trajectory, but they're not poised to break well beyond their respective records. With the possible exception of Python, how popular a language is at age ten is a reasonable rough indicator of how popular it's ever going to be. At its current growth rate Rust would reach 1% market share at age ten. Again, there can certainly be surprises, but I think it's weird to say that actual current evidence clearly points to success for Rust. On the contrary, to become a success it would need to buck the trend and be quite a surprise. So it could happen, but I don't see much to support the claim that this is what's currently happening.

> With Rust looking to get integration both into the Linux kernel and GCC, that points to some pretty positive things for the language's penetration.

I agree that it shows that the language is taken seriously and isn't dismissed as a possible option, and that that's very good. That indicates that the language isn't an immediate irredeemable failure, but I don't think it's an indicator of future success.


> That indicates that the language isn't an immediate irredeemable failure, but I don't think it's an indicator of future success.

I'd simply point to the fact that there are very few languages that have tried to get into the same space that Rust exists in. Even something like D came with an optional GC, which has pulled it out from consideration for things like the kernel or embedded devices.

When you say "most languages have peaked at 10 years" I'd simply point to the fact that Rust is substantially different from most languages. It's not tackling the same market spaces. The ones it is hitting have been slow moving for a while now.


Meanwhile, other languages with GCs have been used without issues on embedded for the last decade, when users aren't stuck in the ways of the past.

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/products-services/jamaicavm/

https://www.microej.com/

https://www.astrobe.com/

https://www.wildernesslabs.co/

Ah but that isn't serious enough.

I guess battleship weapons control might be something serious,

https://dl.acm.org/doi/10.1145/2402709.2402699


A major problem with GC languages is that it is horrible to link against libraries that ship with a GC. Let's say I want to use 10 libraries: if each has its own GC then my program is now running with 10 GCs, each trying to optimize itself, which isn't a tenable situation.

So system-level libraries have to work without a GC, even though system-level programs can work fine with a GC.


> then my program is now running with 10 GCs, each trying to optimize itself

Is this true with Java? I thought it's a single runtime that manages memory in all the code/libraries in your process.


For Java, the virtual machine provides the GC for everything, but then you can only use JVM libraries and not libraries in other languages.

The reason C and C++ libraries can easily be included in basically any other language is that they don't have a GC, so there are no such issues; just call the functions and things work fine.


> A major problem with GC languages is that it is horrible to link against libraries that ship with a GC.

I'm curious which GC languages do this. The most popular ones that come to mind for me are anything on the JVM, JS, and Go, and I've never heard anyone point this out about them.


Libraries work fine in GC languages if they're written in that language. But you can't easily take a library written in Java and import it into your Python project (you can do it, but it's not efficient, and the more languages you include the less efficient it gets). Whereas libraries written in C/C++/Rust/etc can have bindings in every language without issue. Consider libraries like zlib or curl and SQLite that are ubiquitous across every language ecosystem. Those are the libraries that need (well, greatly benefit from) being written in a non-GC language.


Every language with a GC has this problem. If you want a library to be used in both Java and Python, you don't write it in either Java or Python; you write it in C++ or another language without a GC and import that into Java and Python. Trying to import a Java library in Python or vice versa is horrible, since GCs are very hard to work with from outside the language ecosystem.


I don't know about Java, but it's trivial to import a .NET library in Python with a high-level mapping that takes care of everything GC-related automatically:

http://pythonnet.github.io/

I don't see any reason why something like this cannot exist for Java, if it doesn't already.


But this is exactly the point: it is possible, but once you import such a library into Python, you are running both the Python runtime and the .NET runtime, each with its own GC implementation and much other machinery.


But every language needs to get a foothold within its hype period. In a few years the hype for Rust will die down; at that point you won't get new blood unless they are forced to learn it, so it will be hard to grow.

Programmers are forced to learn C++ since many jobs require it. Students were forced to learn Python since teachers chose it as a language at universities. You need some reason like this for a language to thrive after its hype cycle is over, and currently there are no such reasons forcing programmers to learn Rust, and it doesn't look like there will be any within a few years either.


Except I didn't say "most languages have peaked at 10 years"; I said virtually all languages reached or neared the ballpark of their peak at age 10, from the most to the least successful ones, and those include C, C++, and D. I think Python might be the only counterexample. So it's always possible that some other language would be another exception, but there's nothing today to indicate that Rust is gearing up to be such an exception or that it is substantially different from all the languages in all domains that have exhibited that behaviour.

I do agree that it's more likely that Rust is an exception than, say, Scala or perhaps even Go, but the same would be true for, say, Carbon. I.e. while the likelihood is higher, there's nothing at the moment that would indicate that's what's happening.

Obviously Rust has garnered more enthusiasm among PL fans than D ever did, and its peak would probably be higher, but technical enthusiasm and hype is very weakly correlated with long-term success in the world of programming languages. Most charitably you could say it's a necessary condition, but it is very clearly not a sufficient one.


Yes, Python is the only outlier.

Rust very much could still fizzle. If it does, it will be because it failed to change to be adoptable by more users. All existing users are tolerant of niche qualities. If it succeeds, the programmers who already know it will constitute less than 2% of the total, and the 98% will have picked it up after it changed to be more readily taken up.

If it fizzles, current users will always be at least 70% of the total.

So, if you want it to succeed, you will need to welcome the changes that can get it there.


I think all new languages have some challenges that would make it very hard for them to become as popular as older languages (none of the top five languages is under twenty years old).

For one, we're long since in an era of diminishing returns — new languages have a smaller ROI compared to incumbents than the incumbents had to the languages they replaced — while the cost of a switch is the same if not higher (because codebases are larger). So switching to a new language is just not as profitable today as it was twenty years ago.

For another, the market is more fragmented today than it was, say, 15 years ago. For example, while Java is not the only game in town on the server as it was in, say, 2003-2006, no other single language shows any signs of becoming as dominant as that either. Some use Python for server-side applications, some use JS, some use C#, some use Kotlin, and some use Go. There is no single language that people flock to and so Java is still the dominant server-side language (although for a while PHP seemed like it could be it). So even if Rust does gain actual traction, it's very unlikely to become the standard low-level language. Some may use Rust, some may use Zig, some will use C++, some will use C, and maybe some may use Carbon. The only new language around that seems capable of reaching a popularity similar to ~25 year-old languages is TypeScript.

I also think Rust has a fundamental problem specific to it, and that is that it's a complex, rich, language. Rich/complex languages have never been super-popular, and while C++ is arguably more-or-less as complex as Rust, it wasn't when it gained its market share (only to lose much of it shortly after).


Yet, C++ regained share by getting more complex.

Language complexity is a poor measure. Better is the complexity of using a language to achieve an aim. C and Pascal are pretty simple as languages, but they make it substantially more complex to complete a task, because they lack the expressive power needed to help any.

C++11 is substantially more complex than C++98, but much simpler to program with. Rust is both simpler than C++ (at surface level, because it leaves behind cruft C++ cannot) and more complex, because it makes greater cognitive demands (to satisfy the borrow checker) and also because it lacks powerful features C++ offers. C++ is better able than Rust to package semantics in a library and deliver that to users on command. But Rust is way, way better in that way than C.


Yep. In fact Python really started taking off almost 20 years after its release. People forget that Python turned 31 a few months back.


GP said the standard low-level language. Low-level code will always be a much smaller percentage of all code than the heaps of web apps our industry is shitting out on a daily basis.

Thus it would be completely reasonable for the de facto sysdev language to have a small market share overall (unless, as with C/C++, there is a lot of legacy stuff to maintain). Your sources don't shine any light on this at all.


You don't think comparing the current de facto standard low-level languages (C/C++) to Rust shines any light on the issue you raised? [1]

C/C++: 6.17%, Rust: 0.29%

This was addressed in one of the linked sources. What would you accept as evidence, then?

1: https://www.devjobsscanner.com/blog/top-8-most-demanded-lang...


Different metrics tell a different story. For example GitHub pull requests [1] are C++: 2.60%, Rust: 2.09%, C: 1.43%, with a clear trend showing Rust ahead of C++ next year. Or you could look at the Stackoverflow survey of languages used among professionals [2], which gives C++: 20.17%, C: 16.7%, Rust: 8.8%, with rust gaining 1-2% each year.

There's no best metric; they're all biased, and you need to consider a few different ones. Otherwise you won't notice when you've stumbled upon one with an extreme view.

Combining C and C++ in language stats is debatable, they should IMHO be measured separately. When grouped as a language category, "C/C++/Rust" is slowly becoming more common.

1: https://tjpalmer.github.io/languish/#y=pulls&names=c%2B%2B%2...

2: https://survey.stackoverflow.co/2022/#most-popular-technolog...


I assume 1 is pulling only from github.com public projects. Many companies A: don't make their code public, B: probably use GitHub Enterprise or other hosted solutions (especially including non-Git-based ones).

2: Also skews towards a certain demographic


Yes, public github repos, stackoverflow survey respondents, devjobscanner offers, google searches, etc are all skewed in some way.

It's very hard to quantify the effect of those biases though: for example, how does the public/private repo ratio differ between languages? Good luck giving a trustworthy answer to that. Apart from looking at lots of different source kinds, one thing that's fairly trustworthy is the trend of a specific language in a specific source.

On that topic, looking at the "SO questions" metric of the first link, C and C++ both have a strange regular spike in the last quarter of each year. I attribute that to new CS students flocking to SO at the beginning of their term. Another fun trend to look at is the hourly google searches over a week: the weekdays / workhours spike is much more pronounced for some languages than others.


I think looking only at the number of dev jobs leaves too many potential confounding factors to be useful. For instance, how many C/C++ jobs are actually at companies that need those devs to move away from C/C++? What's the overlap? How much of the C/C++ share is maintaining legacy code vs starting new projects? Sure, the data says something about it, but can we draw any conclusions from it other than precisely the numbers you listed? Not really, not without more detailed data.

One thing I would welcome is specifically looking at new sysdev projects over time and what languages they're in.

To be clear I'm not taking a view on whether Rust will become the standard. I really don't know, and I've yet to see anything convincing me one way or the other.


DevOps and web designer (relatively non-programmer) job listings also might list JavaScript, Python, and Java, and maybe Ruby for Chef.

And there's a ton of those job listings.

GitHub repos might be a better measure of penetration and use.


Sure, but even low-level languages have tended to reach the ballpark of their all-time peak market penetration -- however high or low it is -- around age 10.


"... around age 10." C++ (not to mention C)? IIRC, I remember seeing B.S.'s "C with Classes" appear in ACM SIGPLAN Notices in the early 1980s -- 40 years ago. I don't think it hit its peak 10 or even 15 years after its early development. Given that it's C++, that's a pretty major exception.


C++ was huge in the 1990s, C++'s share is probably smaller today than back then since so much of programming has moved over to managed and scripting languages.


If C++ hadn't already hit its all-time peak market share in 1995, it was certainly well inside the ballpark. It's far from an exception.


> programming languages with few exceptions tend to reach, approach, or at least point toward their all-time peak market penetration around age 10

How are you coming to this conclusion? I can think of more counterexamples than actual examples (almost every language in the top 10 aside from PHP was close to peak market penetration at age 10).


I think the low-level world moves slower.

Key players like Microsoft and the Linux kernel developers are only just beginning to pick up Rust. It's been considered promising but immature for much of its lifetime, and it's only in the last couple of years that it's gotten to the point of being considered mature enough for core infrastructure projects (which are really its forte).


I don't think Rust is headed to be the norm, but I don't think it should be compared to most mainstream languages. It's not Python/PHP/Ruby... it's a very difficult niche where a lot of efforts have fallen flat.


> The syntax looks a lot like Rust, though. I'm surprised they made such a break

Convergent evolution. Rusty syntax is pretty close to what you get if you want to make a language look broadly similar to C/C++ while avoiding pathological and/or computationally difficult parsing.
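
A small illustration of the kind of ambiguity being avoided (a sketch):

    // In C-family grammars, whether "a * b;" declares a pointer or multiplies two
    // values depends on whether "a" names a type, so parsing needs semantic feedback.
    struct a {};
    a * b;   // declares b as a pointer to a -- not a multiplication

    // Keyword-led declarations (Rust's "let x: T", Carbon's "var x: T") sidestep this,
    // which is partly why new C++-adjacent syntaxes converge on a similar shape.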


> on its way to become the standard low-level language

It will take a very long time for Rust to get even close to the huge amount of C++ code out there. There are more than 5 million professional C++ developers employed around the world today. And that number is increasing. Don’t get me wrong: I like Rust and other attempts to move beyond C++. But don’t underestimate how much C++ code has been written the last 30+ years. And new C++ projects are started every single day. There are probably more C++ projects started every day than Rust projects. So anything that makes it possible to move beyond C++ while being 100% interoperable is good news.


Carbon is not trying to be 100% interoperable with C++. It's trying for some fuzzy notion of "good enough" - that's really not very different from what Rust is trying to do with cxx-rs. Yes there are serious challenges and I've described them here, but they're not impossible to address while staying with Rust.


> Carbon is not trying to be 100% interoperable with C++. It's trying for some fuzzy notion of "good enough"

No, it's clearly not a "fuzzy" notion of interop even in the most uncharitable interpretation. Your assertion is misinformation.

"Seamless, bidirectional interoperability with C++, such that a library anywhere in an existing C++ stack can adopt Carbon without porting the rest."

Support mixing Carbon and C++ toolchains

Compatibility with the C++ memory model

Minimize bridge code

Unsurprising mappings between C++ and Carbon types

Allow C++ bridge code in Carbon files

Carbon inheritance from C++ types

Support use of advanced C++ features

Support basic C interoperability


And that's still not 100%. The Carbon devs are very clear that there will be some C++ code that Carbon is unable to interoperate with.


Like the standard library - they don't plan to support exceptions.


Oh, that's excellent and interesting news - not a fan of exceptions. I'm not sure how that's going to work with interop when libraries rely on exceptions though. Where did you hear about this? I'd love to know more.


You will probably have to write more C++ code to handle those exceptions in between.


Rust is like C++ but with only the new way of doing C++, which is huge; Rust is like a subset of C++ plus the ownership model, which I think is the right way. C++ is harder than Rust only because it is so bloated. Carbon seems like a less bloated "fork" of C++ with only the new way of doing things, but it allows full interoperability with all C++ instead of partial, and I don't see ownership, which I think is mentally taxing (whether it's worth it depends). If I worked in C++ I would totally use this.


I don't know about it becoming the standard LL language yet. There's a lot of memory management in low level programming, and peppering everything with unsafe seems like it'd be tedious. I'd rather just write in C to begin with.

I'm interested in seeing how things are handled with the Linux kernel's rust support, if it ever becomes more than a proof of concept. That will be a good viability test.


I'd say the syntax looks like most other modern languages, not just Rust.


Syntax migration is easy. It is the semantic migration that is hard.


>Also, Rust (from my biased point of view) is currently on its way to become the standard low-level language

Lol. Gave me a good chuckle!


looks like Rust mixed with Go


Interestingly enough, Chicken Scheme has a library that allows for interop with C++ to this degree:

http://wiki.call-cc.org/eggref/5/bind#c-notes


I believe Clasp has a pretty good C++ interface as well for Common Lisp.


It looks superior to Chicken's, albeit requiring more initial configuration:

https://clasp-developers.github.io/clbind-doc.html


Well, they are different languages made for different things. Clasp was specifically made because the author had a bunch of C++ code for his research projects but wanted something higher level and simpler to implement things in. He landed on Common Lisp, but none of the implementations had good C++ interop, so he made his own. I can recommend Christian Schafmeister's (the creator of Clasp) talks on it; it's interesting stuff, in particular his talk at an LLVM conference.


Isn't this one of the most prominent features in D? Sounds like they're trying for something very much like D, while starting closer to current C++.


D still requires you to write bindings, see https://dlang.org/spec/cpp_interface.html

Also D objects have different lifetimes & requirements, which complicates things (see https://dlang.org/spec/cpp_interface.html#lifetime-managemen... )

Carbon appears to be auto-generating bidirectional bindings, and since it has the same memory model, it has none of the awkward interactions between non-GC'd and GC'd worlds that D does.



> Also D objects have different lifetimes & requirements, which complicates things

Does Carbon actually solve these issues, though? It would be great if it did, but they don't list complete interop w/ C++ as an actual goal of theirs, only enough to make it practically viable for development.


Carbon's memory model is compatible with C++'s[1], so where would you foresee complications? D could have been so much better here if it just hadn't chosen to be a GC'd language. That single choice pretty much kills any easy interop.

1: "For example, C++ and Carbon will use the same memory model." https://github.com/carbon-language/carbon-lang/tree/trunk/do...


AIUI, you don't need to use D with a garbage collector. It can be disabled.


The D standard library is heavily tied to the GC. If you want to avoid the GC, you'd have to give up the standard library, and if you do that, you have to give up most of the D ecosystem.

As far as I can tell, there hasn't been progress in untying the stdlib from the GC - https://github.com/dlang/projects/issues/56


You don't need non-GC stdlib for C++ interop, though.


D is always rebooting what it wants to be. I no longer think it stands a chance of being relevant in the next decade.


D is closer to this than this Rust rip-off.


Yeah, why not D? It's such a good, mature alternative to C++, albeit not widely adopted.


Like every braces language following C is a C rip-off?

Rusty syntax is objectively superior to almost everything else we've come up with, there's no reason it shouldn't be copied.


> Rusty syntax is objectively superior to almost everything else we've come up with, there's no reason it shouldn't be copied.

You misspelled "subjectively". Rust is a hideous-looking language, despite its many other benefits.


If Rusty syntax is objectively superior to almost everything else we've come up with, that "almost" should make us think before blindly copying it.

So, for those thinking “Rusty syntax is objectively superior to almost everything else we've come up with” (I don’t, if only because I don’t believe the syntax of programming languages can be compared on quality without considering the audience), what syntax do you believe it is not objectively superior to?


> is objectively superior

Those objective criteria of superiority being?..


It eliminates the Most Vexing Parse.

Consider this C++ code:

  Foo bar();
A programmer could make the simple mistake of thinking that this declares a variable of type Foo. Carbon eliminates this by having explicit keywords for variable and function declarations. (This style is much more Rust-based than C++-based.)

  fn bar() -> Foo;
  var bar : Foo;
It makes parsing easier for both the users and maintainers of the language.


There are literally thousands of shipping languages that don't have the most vexing parse problem.

If that is Rust's entire syntax advantage, it isn't a very big one.


> It eliminates the Most Vexing Parse.

C++'s most vexing parse issue ceased to be a concern around a decade ago with the introduction of uniform initialization.
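
For the unfamiliar, a minimal sketch of the fix:

    struct Foo {};

    Foo bar1();   // most vexing parse: declares a function returning Foo
    Foo bar2{};   // C++11 brace (uniform) initialization: unambiguously a default-constructed Foo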


If that's the only criterion then Pascal's syntax, which doesn't have "the most vexing parse" problem either, and for the same reason, is also almost better than anything else.


So by this logic, pretty much every language other than C++ has “objectively best” syntax.


long long

Templates


I would also wonder about D and Odin, neither of which are particularly successful but both of which are also supposed to be kind of like C++ replacements.


Zig is, to my knowledge, specifically designed to allow the kinds of use-after-free that C and C++ have. Their claim is to not “hide” allocation by requiring everyone to implement manual retain and release.


Zig is, at most, as safe as a language like Modula-2.

Definitely safer than C and C++ with regard to bounds checking and numeric conversions, but without any language protection against use after free, other than what you would already get with heap debuggers on C and C++.


Nim is even better at that, as it generates C/C++ code natively and compiles it using GCC.


For once, if they had focused on compile times, that would have been a very valid reason to switch.


Minus having to deal with memory management?


Designing a language to interop well with a single language is short-sighted in my opinion. I want a clean FFI that I know will be able to interop with Swift, TypeScript, C#, C++, C, Python, etc. In my current project I'm using Rust/protobuf to do all of this and it rarely gets in the way of how I want to do things.


The major use case for Carbon is to advance Google's internal, multi-billion-line C++ codebase.

As nice as a greenfield language with a clean ffi would be, "extremely close ties to C++" is Carbon's primary benefit.


sorry for nitpicking what is likely intentional exaggeration, but multi billion can’t be anywhere near accurate, right?


There are many billions of lines in Google's monorepo[1], summing to well over 80 terabytes of data. The paper below is from 2016, and the repo has only gotten bigger since then.

Exactly how many of them are C++ is not disclosed, as far as I know. But it is public knowledge that C++ is the biggest of Google's primary languages (C++, Java, Go, Python, Javascript, and so on).

https://research.google/pubs/pub45424/


No, Google really has that much code. Some of it is probably machine generated but it's definitely in the billions of lines.


It's a bit of an exaggeration, but it's close. If you count all Google's C++ code base (google3 + open source) it's fairly close to one billion. If you count regardless of language, I believe it's ~3B.


It's funny you list TypeScript there, given TypeScript itself is a language designed entirely around interop with a single other language.

Kotlin is another such widely liked language that was designed entirely around interop with a different language.


Kotlin took an entirely different approach, one that I agree with. It can easily interact with dozens of other languages because of it.

TypeScript is a superset of JavaScript and intentionally short-sighted to fill a current need.


Many long living languages are those that are source compatible with a single language. C++, Objective-C, TypeScript come to mind.


This language has “zero-cost” with C and C++, “minimal cost” with anything else that is compatible with those ABIs, and “moderate cost” with everything else via protobufs :P


C++ itself was a language designed to interop well with a single language.


> In otherwords, what carbon can do that Rust can't do, is take a C++ class with a `foo` method and call that method. Or create a class with a `foo` method and call that method from C++. Probably one of the biggest hurdles to get over in C++ interopt. Most don't do that, instead you'd make a C function binding and struct and move data/invoke functions through that.

https://cxx.rs


I tried cxx and walked away somewhat disappointed.

One problem is that it's hard to understand the code. The whole thing is one giant proc macro, which is by itself tricky to run and debug. Also it uses techniques like implementing Deref to simulate inheritance which makes it even more confusing.

The main problem is the difficulty in extending it. Support for std::string and std::vector are "baked in" in a way that does not generalize. I tried to add support for std::wstring and it is quite non-trivial.

Because it is a proc macro it's also tricky to integrate into a build system.

To me it seems like Cxx is "purpose built" which is fine. But after working with it, I longed for a Python script that just populates a template string or something.


cxx is great but it isn't anything close to what cogman10 is describing.

It actually can't ever be, because C++ has move constructors and Rust deliberately doesn't, so you can't for example return `std::string` from a Rust function by value.
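
To illustrate, here's a hypothetical self-referential type (not taken from any real codebase), which is one common reason a C++ type is not trivially relocatable: the move constructor has to do real work, whereas a Rust move is always a plain bit copy with no hook to run that fix-up.

    #include <cstring>

    // Stores a pointer into its own buffer, so a bitwise copy to a new address
    // would leave `cursor` dangling into the old object.
    struct SelfRef {
        char buf[16];
        char* cursor;  // always points somewhere inside buf

        SelfRef() : cursor(buf) { std::memset(buf, 0, sizeof buf); }

        // The move constructor must re-point `cursor` at the *new* buffer.
        SelfRef(SelfRef&& other) noexcept : cursor(buf) {
            std::memcpy(buf, other.buf, sizeof buf);
        }
    };

Rust can only handle such a type behind a pointer (pinned or boxed), which is why cxx wraps these types rather than moving them by value.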


It's currently unclear if Rust can interop with C++ with high fidelity. For example https://docs.rs/moveit/latest/moveit/ and https://github.com/google/crubit/blob/main/rs_bindings_from_... provide functionality to use non-trivially relocatable C++ types from Rust.


Bit hard to follow but as far as I can tell those are pinning the C++ types and wrapping them in some other object that allows a move constructor to be called on them to move them to some other pinned location.

It still doesn't let you natively move them, and if you have to wrap them in something you may as well just wrap them in `Box<>` and not move them, which is what cxx does.

I guess maybe it makes a difference for passing things into existing C++ interfaces?

Anyway it looks insanely complicated.


It's not immediately obvious from this page and GitHub org, but this is a Google-led project. It's led by Chandler and the C++ toolchain team. I have no idea of its end goals or how open to non-Google ideas it will be.


> A key example of this is the committee's struggle to converge on a clear set of high-level and long-term goals and priorities aligned with ours [https://wg21.link/p2137].

I was frankly shocked by that goals and priorities document. The non-goals section reads like an open declaration of war against anyone whose use cases for C++ differ from GOOG and NVDA. My interpretation of Carbon is that since GOOG failed to take over the standard in favor of its narrow use cases, that they are building a new language optimized specifically for them.

> I have no idea ... how open to non-Google ideas it will be.

The most-generous attitude to take is that it will be managed similarly to Go. If your use cases and priorities are well-aligned with theirs, then feel free to use it. But while they may listen to third-party feedback, it will be their own use cases and opinions which dominate the language's development.


(one of the Carbon leads)

Success for the Carbon Language requires it to successfully be an independent and community driven project. We may not succeed (this really is an experiment), but we're working hard to engage broadly and early in large part because of this being such an important goal and priority for us.

Projects like this have to start somewhere, but can grow and become community endeavors. We are also already seeing strong interest from other companies and organizations in participating in this experiment.


The README doesn't expand on what is probably the most challenging problem: how do you achieve effortless C++ interop without burdening Carbon with all the odd behaviour and memory safety / UB issues prevalent in C++?

In Rust for example, `unsafe {}` blocks are not just "local unsafety". They can freely operate on all memory, so they are infectious and are essentially a marker for "dangerous code below, be extra careful and audit lots".

But if all code can freely interoperate with C++, how do you improve upon C++, apart from relatively isolated features like a better generics system?

To what extent can a Carbon compiler that is deeply aware of C++ semantics mitigate the pitfalls?


C++ has many many issues other than memory safety. For example implicit type conversion, template duck typing, the include system, the very very very complicated name lookup system, a gazillion footguns inherited from C.

You can definitely massively improve upon C++ without touching its actual computation model.


Starting by using a sane set of compiler flags to make code safer by default.


Tangent. Absolutely loved your CPPCON talks. Thanks.


While we have you here, may I ask why it is called Carbon? Is it because the atomic symbol for carbon is C? :)


To make it as hard to search for solutions online as it is for "go".


Shouldn't be a problem if they're both created by a search engine company.


What's a good name?

Power C++ Syntax Plus Edition

C&₹π÷×√


Will Carbon natively support exceptions or is it solely going the Rust `Result<T, E>` route ?


This suggests that Carbon itself won't support exceptions: https://github.com/carbon-language/carbon-lang/blob/trunk/do...


> GOOG and NVDA

Why use these stock index abbrevs (or whatever they are) in this context here? GEEZ!

To the topic: it sounds a bit grumpy. If we look at languages and how they evolve, many suffer the same phenomenon: almost all are Turing complete, they try to gain concise (or simply understandable) expressiveness somehow, and they try not to break compatibility too much, to varying degrees. The net result is that they grow and grow until at some point they feel too big, with too much legacy dragged around (C++) or the "one obvious way" lost (Python), as they cater to use case after use case.

Limiting scope can be good in that regard. Having key goals defined, and not catering to every small new use case, is a valid attempt to keep this from happening. So while I dislike Google's power, I wouldn't feel too bad about anyone attempting this with their own fresh language.


It's an implication that the driving force for the development is money. By the way, jump on the GEEZ train, that stock's gonna go through the roof!


Bits are in such short supply these days that it's necessary to save 32 of them. Think of all the things you could be doing with those 32 bits!


Must be another one of those supply chain disruptions.


Oh so that's what they are.


I really don't see that. At the very start they say

> That said, our experience, use cases, and needs are clearly not those of every user. We aren’t pushing to directly build consensus on these points. Rather, this is presented as a vehicle to advertise our needs from C++ as a high-performance systems language.

The point of the non-goals is simply stating things they don't care about, which is pretty reasonable. After all, it's easy to say what you want, but what are you willing to give up for it?


I don't necessarily agree with all their design goals, but it is unfortunately true that getting a large coherent change through the C++ committee is a herculean task. The language ends up doing a random walk through the design space via small steps, without a coherent long-term vision, because nothing else is feasible.

Implementing a coherent vision might lead to a better language even if most stakeholders might disagree with every single change.


There are weird governance issues with the C++ committee too. Acceptance or rejection of a feature can depend on which reps just happen to show up that day.


Sounds a bit like "C++ but we get to change the ABI, and maybe break some old code for the sake of better-enforced safety guidelines".


C++ also has a very hard-line "you don't pay for what you don't use" philosophy, which sometimes leads to standard library APIs or language semantics that are a bit tortured.

Compare C++ <random> or <chrono> against, say, the equivalent functionality in Rust, Go, Java, C#, etc. C++'s APIs are a bit overcomplicated, or at least they look that way if you don't know the various reasons why the C++ standard defined them that way (reasons which are probably not relevant to your use cases).
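
For a sense of the verbosity, here's what a single uniform integer looks like with standard <random> (nothing here beyond the standard library): engine, seeding, and distribution are all separate, explicit choices, where most languages expose a single call.

    #include <random>
    #include <iostream>

    int main() {
        std::random_device rd;                          // nondeterministic seed source
        std::mt19937 gen(rd());                         // explicitly chosen engine
        std::uniform_int_distribution<int> dist(1, 6);  // explicitly chosen distribution, inclusive range
        std::cout << dist(gen) << '\n';                 // one die roll
    }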


"C++ also has a very hard-line "you don't pay for what you don't use"

This always felt eye-rollingly false to me. We can't have a better hash map since we all need to pay for std::unordered_map's unneeded features like bucket access. We are certainly paying for what we don't use.


"You don't pay for what you don't use" means the compiler doesn't generate worse code than if you had written the same by hand.

Every other reading of it is a misuse of what Bjarne originally meant by it.


"You don't pay for what you don't use" does not mean "the compiler doesn't generate worse code than if you had written the same by hand".

These 2 things are C++ core principle, but they are 2 different things.

"You don't pay for what you don't use" means that I you do not use a C++ feature, your runtime performance won't be affected by this feature. For example, non virtual functions are not slower because virtual functions exist.

In our case here, if you do not use "std::unordered_map" and decide to implement your own unordered map, then it is as-if "std::unordered_map" never existed. You are not forced to use "std::unordered_map" and you own map won't be slower because it exists.

"the compiler doesn't generate worse code than if you had written the same by hand", or rather "What you do use, you couldn’t hand code any better", means that if you decided to implement a C++ feature by hand in C++ or C, your implementation could only hope to match C++ implementation. For instance calling a virtual function in C++ will never be slower than a similar handwritten late dispatch implementation.


> this is a Google led project

This doesn't give me much confidence if it's corporate governance rather than open governance.


Based on what they've told me the long term governance plan is to establish an independent foundation, with development controlled by three equal lead developers from different companies.

It may have been started mostly by Googlers, but they want other companies and individuals to participate.


Official governance vs the effective governance is the real crux of this issue. Immediately starting from the lead developers separated by companies already puts a deep corporate interest spin on the project. Choosing to do a lot of the initial work in secret and then disclosing is also a pretty big data point.

I think this is going to be an uphill battle for them and I hope they win it, but I'll be skeptical unless I start seeing radical (for Google) transparency basically immediately.

Also worth pointing out that the language is not immediately worthless even if they fail or only partially succeed in this endeavor!!


This is an odd line to take considering the language spec is unfinished and there’s no implementation. Anything less than or earlier than this phase of initial work would be akin to “let’s make a new C++; anyone have ideas?”

Also, for what it’s worth: despite being an ISO-standard language, C++ is still heavily swayed by corporate interests, with most committee members being tied to BigCos. This has the effect of somewhat-necessarily aligning language progress with its most significant users. Without this alignment, the language might be “better” in some respects, but less useful.


The governance of the project is not immediately clear to me, and I have to assume that the Google team is working in good faith. That team cares a lot about C++ and its future. They are obviously aware of this stigma. I also think that Google is pretty bad at open source governance though :/

Disclosure: Former Google engineer who worked sorta adjacent to some of those people.



Huh, looking at that page I saw Kate Gregory is involved. To me, as a mostly-outsider who dabbles in C++ occasionally, she has seemed incredibly dedicated to seeing C++ thrive and to helping people come to understand the language. Her involvement here seems like a huge positive, but maybe that's coming from a place of ignorance.


Hahaha, I really like the "painter" terminology. A+


Nor does it convey any confidence that Google will support it.


Can we not with this tired joke. Yes, we're talking about the company with hundreds of chat apps, but we're also talking about the 100,000+ employee company that developed and supports Golang, Dart, and Flutter. But ignoring all this, the GitHub page is pretty clear: "Carbon is currently an experimental project." You shouldn't have confidence that it will be supported, but they are pretty clear about it; they're trying to find the right fit.


>> Can we not with this tired joke. . . . You shouldn't have confidence that it will be supported . . .

It is not a joke.

It is a reputation that Google has earned through its actions and inactions.

As shown by https://killedbygoogle.com/ and numerous desperate posts for help [1][2][3] on this and other web sites, Google's "must launch a new shiny thing" promotion culture and abysmal customer service have eroded public trust in the long-term viability of anything that Google creates.

[1] https://news.ycombinator.com/item?id=5523992

[2] https://news.ycombinator.com/item?id=13145927

[3] https://news.ycombinator.com/item?id=31837795


The only comparable thing to an open source programming language on that list is AngularJS, and I wouldn't say that was killed but rather superseded by Angular 2 (which is also a lot better).


there was GWT.


Looks like the GWT community just released a new version in June.


I wasn't joking.

But yes, I agree, we should have no expectation of support for experimental or beta products.


Looking at other languages Google developed and continued supporting, this comment is a bit unfair.


So you had no problem with Golang being led by Google for more than 10 years and now this new language is somehow a problem because that one is also led by Google?


This is assuming a lot about me when in fact

- I don't use Golang

- As an outsider, the governance has seemed unhealthy, like with how dependency management was dropped out of nowhere

However, it has been relatively successful. Dart's success has been more mixed. I am also looking more broadly at projects like Bazel.


Go is governed very well for all practical purposes. If you disagree with their opinions on language design (I do, too) that's another thing.


The primary goal is clear from the P2137 document: Performance. There are optimizations both in the standard library and the language that the C++ committee will either adopt too slowly for Google’s needs or not at all.


No good can result from the interaction between the most complex and powerful programming language and the corporation who created the programming language for dummies.


They also created Dart, which is a joy to write.


A corporation is not a monolith.


From the end of the safety document:

> Overall, Carbon is making a compromise around safety in order to give a path for C++ to evolve. C++ developers must be comfortable migrating their codebases, and able to do so in a largely automated manner. In order to achieve automated migration, Carbon cannot require fundamental redesigns of migrated C++ code. While a migration tool could in theory mark all migrated code as unsafe, Carbon should use a safety strategy that degrades gracefully and offers improvements for C++ code, whether migrated or not.

> That does not mean Carbon will never adopt guaranteed safety by default, only that performance and migration of C++ code takes priority, and any design will need to be considered in the context of other goals. It should still be possible to adopt guaranteed safety later, although it will require identifying a migration path.

That's very interesting and pragmatic. It would be interesting if they can eventually come up with the same level of safety guarantees via a different path than Rust's borrow checker.

-------------

Since one of their goals is to automatically translate modern C++ to Carbon, I do wonder how well that is going to work in general.

I definitely welcome an alternative to C++ that would be easier to read and understand. That would be a benefit to the world.


This is also how Meta got Hack.

It's not like PHP is unsafe to begin with like C++, but the language does have a ton of problems, and Meta's massive codebase could only be migrated to another language gradually. Hence, Hack. Better language, better tooling, more productive programmers.

Note that unlike Carbon/C++, Hack is backwards-compatible with PHP. So the migration is somewhat more gradual.


The borrow checker is a pretty smart solution, pushing checks to compile time while keeping the runtime efficient, so yes, it would be interesting to see if there is an even better alternative. But you can't beat that kind of safety into an old C++ code base, so I am pessimistic about retrofitting.


I wonder what would happen if Security became a compiler flag?

For instance, just like the -O1 or -O3 flags work for optimization, something like a -S1 or -S3 would be really useful.

To me, there are lots of times when I just need to get an idea into code. Then there are times when I need to make sure that code just works™.

Having different compiler flags would really make that nice, and for devops, it would allow requiring that anything pushed to production compile successfully with -S3 first.


You already have them; true, it is more than one flag and isn't bulletproof, but it is still way better than not using them.


Sorry, I wasn’t as clear as I meant to be. I was specifically thinking about things like Rust lifetimes being a compiler warning level instead of an absolute.


You mean that you want your compiler to say "I know that this will sometimes fail, but I'll let you burn yourself unless you enable a flag"? Should it also have a flag to say "Told you!" when it happens? xD


(Speaking only for myself, I just watch from the sidelines and have no involvement)

This seems like Google’s response to

1. Rust not being sufficiently “Go-like” (in the sense of the “The key point here is our programmers are Googlers, they're not researchers” quote) where C++ lets a bunch of people who are not really experts in the language write footguns that Google has to deal with when they cause problems at scale. They want a dumbed-down language (some would use words like approachable/safer/whatever here) for them so nobody will send in CLs with ”clever” code that causes headaches later. I guess they’ll need to have generics but I’m guessing that choices will be made to avoid introducing advanced type theory, functional programming, etc into the language.

2. Google uses C++ differently than everyone else who uses C++. In particular, they have a lot of statically linked code from a monorepo that gets recompiled all the time. This has caused them to put up proposals to “fix” the language that nobody else will support and don’t get adopted, because they break the ABI or backwards compatibility in ways that are unacceptable to the others who participate. It seems like this language is the result of that frustration and subsequent soft-withdrawal from the C++ WG.

I haven’t looked at the design much yet beyond just the simple examples and I think it mostly looks reasonable, but I feel like Google designed this to solve their problems with C++ and is just throwing it out there if people are willing to adopt it because it’s there and Google says it’s good, just like how Go gained traction. Sometimes Google does make good things :) Some Go developers would say that about Go, I’m sure. But if you’re using this, it seems pretty clear that the needs of this will be driven by Google, and the above two points are probably things you should keep in mind as you watch it evolve.


My guess is that it's pretty much all about 2. They say explicitly in their FAQ that you should be using Rust, or any other well-established language, if not constrained by the need for deep, complex interop with C++ code bases. They're not positioning this as a "more approachable" alternative to Rust, only as one that sticks closer to existing C/C++ coding patterns.


To be fair, for many many many people there is a requirement for deep, complex interop with C++. Even if you're getting started on a new project being able to easily leverage the C++ ecosystem could make or break it.


Hence why I love C++/CLI, way easier than getting P/Invoke attributes right.


(1) does not seem true, they have documented some seemingly detailed reasons why Rust is not always the right choice, and acknowledge that in many cases it is: https://github.com/carbon-language/carbon-lang/blob/trunk/do...


Google can have many reasons why they do something. Not all of them will always be listed publicly, or even be written down :)


True for sure. This made me laugh. Lol.


> Seamless interop where existing, unmodified C++ APIs are made callable from safe Rust requires the C++ code to follow borrow checking rules at the API boundary.

> Seamless interop where safe Rust APIs are made callable from C++ requires C++ users to follow Rust borrow checking rules.

Their complaints about borrow checking rules at the interop layer ring hollow to me. Whether the new code is being written in Rust, Carbon, or C++... the lifetime of shared memory must be well-understood by the developer writing the code. Rust just makes this problem explicit and enforceable. Sweeping the problem under the rug isn't a better option.

> However Rust imposes stricter rules than C++, disallowing some design choices that were valid in C++.

The word "valid" is doing a lot of heavy lifting here, and I'm generally skeptical of these "valid" architectures. If it is so easy to know that these architectures are valid, why do we still see so many memory safety issues in Google's C++ code? Incrementally restructuring C++ code to be more provably correct doesn't sound like a terrible thing... in fact, it sounds like exactly what they should be doing.

> However, we are not certain that [C++ can be migrated to Rust incrementally]

Firefox is a large, historically C++ codebase that has undergone incremental rewrite into Rust for years now. It seems quite certain that this is both possible and practical! Where is the uncertainty? Of course, Mozilla's budget pales in comparison to Google's.

Rust is an extremely extensible language, and it is absolutely possible for Google to build their own version of `rust-bindgen` that suits their particular C++ codebase's idioms. That would be a much simpler undertaking that achieves their goal of incrementally adopting memory safety.

I hate to disparage new languages, but this feels like Swift all over again... it's just NIH. At this point, the die has been cast, and I'm sure it would be career suicide in Google for anyone behind the Carbon project to admit at this stage that "actually, the project turned out to be unnecessary after a discussion on HN!", so I don't expect to change anyone's mind. I'm just wasting my breath making obvious counterarguments that I'm sure have already been considered and ignored.

To be extra clear, I would love to see a company like Google take on the challenge of building an ergonomic language that exceeds Rust in terms of safety, but Carbon is a half-measure that doesn't pretend otherwise. If all of Google's C++ code is magically rewritten in Carbon, they will still apparently have memory safety problems based on what Carbon's README says... and then it'll be time once again to consider "maybe we should have ported to Rust after all". It just feels like such a waste.


> Their complaints about borrow checking rules at the interop layer ring hollow to me.

It's actually a big problem. There are a whole lot of Rust library APIs that are only provided with an idiomatic "safe" interface, but this actually imposes stronger, more demanding preconditions on those API calls than are warranted by the actual code, which could easily work with e.g. raw (possibly aliased) pointers, or owned-but-pinned (non-movable) data. This creates unneeded pitfalls in Rust-C/C++ interop. The counterargument is that future versions of that library code might benefit from those stronger preconditions, but that's more of a theoretical point; it just doesn't apply in most cases.


I think a number of companies have proven that it isn't a "big problem" by successfully interoping Rust and C++. What you describe might be an inconvenience, but this goes back to what I was saying about how Google could achieve their outcome with less effort by customizing their own version of bindgen. Google could give up some safety in exchange for the desired, incremental outcomes.

If they needed to fork the Rust compiler to pessimize a few optimizations that the Rust compiler would normally do (that would invalidate such unsafe C++ code with additional UB), even that would be far less effort than inventing their own entire new language and ecosystem from scratch. That is without considering how Google could work with the Rust Core Team to contribute solutions for ergonomic issues they're encountering in the interop process.

When the alternative is "building an entire, general purpose programming language", there are a lot of other things you can do that would be much easier.

And, as previously mentioned, Carbon is only addressing the low hanging fruit... it isn't stated anywhere I've seen that they even intend to achieve parity with Rust. All of this effort, just for a lesser outcome.

I'm sorry that I'm frustrated at big companies choosing to take half measures that seem even more expensive than the full measure. I have used Rust professionally for years; I'm clearly biased, but it's not that hard. The level of safety that Rust provides should be table stakes in discussions of unmanaged languages... not the ceiling for what we can possibly imagine, but Carbon is a statement that Rust's bar is too high for Google to reach, and that's pretty depressing.


I'm not talking about giving up safety. I'm talking about library APIs that are simply not usable by code that involves, e.g., possibly aliased pointers or pinned data, other than by invoking possible UB, merely because those things are "unidiomatic" in Rust (even though, if the unsafety is properly contained, it's quite possible to e.g. write proofs of correctness for such code, or verify it via a model checker). It's a recurring topic on the Rust Internals forum.

"Pessimizing" the compiler is not a solution, UB is still UB. You'd have to actually look at what the code does in detail and provide a generalized interface to it, possibly needing to introduce new "unsafe" blocks in the process and prove their safety.


> other than by invoking possible UB

I addressed this when I talked about a compiler fork. Google could maintain a patchset against the Rust compiler that makes this behavior well-defined in a way that suits them during the transition period as C++ code is rewritten. Or they could work with the Rust Core Team to address this issue in a way that is favorable to them over the next year or two. Either of these options are much less work than developing an entirely new language and ecosystem.

As far as I can tell, Google tried neither option.

Still, Mozilla and others have gotten along just fine as it is.


> Rust compiler that makes this behavior well-defined in a way that suits them during the transition period as C++ code is rewritten

To be clear, this is "forever". I don't think anyone involved has an expectation to truly get rid of Google's existing C++ codebase ever. So the question is, do you really want Google to fork rustc to implement some google specific UB handling? Is that good or fair to the rust community?

Even if it was, the HN comments would be along the line of "Google is using an embrace/extend/extinguish approach to force rust to do certain things bypassing the normal rust language processes", and they'd in some ways be correct! Could rust really choose different semantics from the already existing one, or would google-rust implicitly be a standard that rust would need to support?

Would that be a good idea for anyone?

> Still, Mozilla and others have gotten along just fine as it is.

IIUC, one of the driving factors behind this is that vanilla C++ is too slow, and ABI compatibility prevents up-streaming certain performance improvements. So what works for Mozilla probably doesn't work here, because what works for Mozilla has a performance penalty. Sure it's small, but that's going in the wrong direction.


> To be clear, this is "forever"

I guess I didn't finish my thought in that comment, because it isn't forever... it's only until either the C++ is gone, or the Rust language has developed an equally acceptable alternative (which could be upstreamed by Google, or come from somewhere else).

And realistically, the C++ would never have to go away entirely for this UB handling fork to become unnecessary, depending on the driving factors. Some code paths are more performance sensitive than others, and these tend to represent a small portion of the overall code. Once the performance sensitive code paths are rewritten, it matters less whether there is a small performance penalty for the remaining C++ code that is less sensitive to performance. But, at Google Scale, the cost of the developer-hours to maintain that fork would probably be less than the cost of the small increase in CPU hours that removing the fork would result in, so... yeah, maybe forever or until there is an upstream solution. Maintaining a patchset like this would still cost way less than maintaining an entire compiler and all the language tooling needed to support that language... does anyone really disagree? A new language is far more than just a compiler, let alone a patch for a compiler.

> Would that be a good idea for anyone?

IMHO, it definitely seems better than Carbon.


But the way the Rust compiler works today is quite acceptable already. This isn't a Rust-the-language problem other than in secondary ways that are hard to address by definition (such as "movable" data being the default and Pin<> references being special), it's a Rust-the-library-ecosystem (including, but not limited to, std and/or core) issue. It can only be addressed as such.


> I hate to disparage new languages, but this feels like Swift all over again... it's just NIH. At this point, the die has been cast, and I'm sure it would be career suicide in Google for anyone behind the Carbon project to admit at this stage that "actually, the project turned out to be unnecessary after a discussion on HN!"

I doubt that Chandler would lose his job if this project is abandoned. In fact, I'd say chances it is abandoned (at least, mothballed, never to be revived) are significant so if Chandler's job does hang on this work that was a bad idea. It's an experiment, I personally think it's looking in the wrong place, but of course the point of experiments is that you don't know what the results will be. I doubt that Chandler has labelled this an experiment without knowing that.

So, my expectation is that probably Carbon won't even complete design, but if Chandler leaves Google that won't be why.


I work at a smaller C++ scale than Google, and I found most of their proposals reasonable but undoable for practical reasons rather than first-principles reasons. For instance, compiler vendors, who have veto power over language changes, strongly prefer not breaking ABI. ABI changes are fine if you build the world at every commit, and also if you don't have legacy code. C++ users 200 years from now would most likely benefit from the things Google wishes it could do in the language but can't.


> ABI changes are fine if you build the world at every commit, but also if you don't have legacy.

That's really not true and we can look at other languages to see the various amounts of un-true it is.

If ABI constantly changes then what you said is possibly true, but nobody is actually proposing that. The idea instead would be breakage along a std version. Like C++23 would be an ABI break. In the same way Java has had various ABI breaks over the years. You'd be blocked on taking the new ABI until all your dependencies have released updates, sure, but this isn't unheard of or unsolvable.

It's also pretty much what you expect from any semver library, too. API/ABI breaks are quite common outside of monorepos, after all. As long as it's appropriately documented (such as with a major version bump) and there's appropriate dependency tracking, the world can handle it fine.

The C++ committee seems to be extremely shy from going this route, though. Even though C++11 triggered some ABI breaks ( https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_a... ) and everyone seems to have gotten along with that just fine.


They haven't gotten along with that just fine; that GCC case is exactly what gets brought up when discussing ABI breaks, because it caused a Python 2/3-like experience across the GCC and GNU/Linux communities.


And yet everyone is successfully on C++11 & newer, and that transition still happened in a fraction of the time of the Python 2/3 transition. Nobody even mentions that when discussing modern C++, it's that much of an "everyone got over it" thing.


Apparently you aren't much into the C++ world; I advise listening to the C++ podcasts at the very least.

Plenty of people discuss the matter.


ABI with a clear major version number would solve this. It's what C++11 should have done with CoW strings.

You just ship two compiled versions of the standard library.


It seems that the goal is a new language with full interop with C++, kind of like Zig's interop with C.

- I wonder how much baggage had to be maintained for interop. If some of it could be isolated, how did they do it?

- Does this fall into any of the traps that D initially did where it allowed C interop but was, by default, more limited in what C environments it could run in or is this as flexible as C++ for environments?

As for the language itself, I've not had a chance to dig into it too much but I am sad to see that it uses explicit local inference by replacing the type name with `auto` rather than eliding the type completely [0]. While it has its pains at times, this is something I've come to enjoy in Rust.

I also didn't see mention of tooling. Having out-of-the-box build, test, and code formatting would be a big help for establishing community standards / practices, even if the build/test tool might get limited when having to do C++ interop. At least for pure-Carbon libraries it would be a big help!

> Once we can migrate code into Carbon, we will have a simplified language with room in the design space to add any necessary annotations or features, and infrastructure like generics to support safer design patterns. Longer term, we will build on this to introduce a safe Carbon subset. This will be a large and complex undertaking, and won't be in the 0.1 design. Meanwhile, we are closely watching and learning from efforts to add memory safe semantics onto C++ such as Rust-inspired lifetime annotations.

I'm a bit skeptical if they push off lifetime annotations too far.

[0] https://github.com/carbon-language/carbon-lang/blob/trunk/do...


Looks like this experiment has been going on for a couple of years, a bunch of familiar (internet-familiar, have met Chandler a couple of times but do not know any personally) names were secretly working on it, and it was only made public a few days ago:

https://github.com/carbon-language/carbon-lang/pull/1363


I was interested and intrigued, but then:

     // A dynamically sized array, like `std::vector`.
     var circles: Array(Circle) = ({.r = 1.0}, {.r = 2.0});
Yegads, they've put effort into ensuring API and ABI compatibility with C++, but they've gone and decided to change the meaning of nouns for basic types?!

Why, just why would introducing that obvious footgun be appealing? It raises the concern that the language is full of other arbitrary choices hiding dangerous footguns.

Edit:

Strangely, their test suite makes it look like Array is bounds checked and fixed size; there is no test for resizing:

https://github.com/carbon-language/carbon-lang/tree/trunk/ex...


It's std::vector that was weirdly named. In plenty of codebases "Vector", particularly gamedev and scientific, will mean the mathematical object with that name.

Other languages don't need to replicate this mistake.


True, though in OP's defense I don't know if I've ever seen a language refer to a growing list of entries as an "array".


Doesn't seem particularly uncommon. Perl, JavaScript, and Ruby come to mind if you're really looking for languages that use just a raw term "array" with no extra qualifiers.


Object Pascal's arrays are dynamic:

    var kek: array of Integer;
    setlength(kek, 666);  // kek is now Integer[666]
    setlength(kek, 1337); // kek is now Integer[1337]


Java: ArrayList

Zig: ArrayList

GLib: GArray

Objective-C: NSMutableArray


FWIW, ArrayList is a List, backed by Arrays.

So List would have been the more Java/Zig way to name it (also Python, etc.)


> FWIW, ArrayList is a List, backed by Arrays.

You probably know that, but just to clarify: it's backed by a single array, reallocated repeatedly, just like std::vector in C++ (although growth factors are different, I think).

Just "List" probably risks that some people will jump to the conclusion that it's a linked list. I'd probably prefer the full "ArrayList". Although personally I'd use something like "DynArray"/"DynamicArray".


These are examples in favor of my point, not against it.


Well, the data structure is called a dynamic array


I don't see any reference to it in the GH readme, but I do wonder what they call actual arrays (as in, fixed size allocations of memory that store a contiguously typed value).


I was talking about std::vector (and Rust's Vec). Those are dynamic arrays (as in, growable allocations of memory that store a contiguously typed value).


An std::vector is a vector in the mathematical sense, though. It's a homogeneous tuple. It's just that using an std::vector of std::vectors to store a list of points would be inefficient.


It's not. Vectors can be added together and multiplied by scalars. That they are often represented as tuples of coefficients is just notation, doesn't matter for the notion of vectors, and vice versa: a tuple is not necessarily a vector.

1. std::vector doesn't have a fixed dimensionality, as would a mathematical vector. A fixed-length array actually makes more sense as a vector.

2. It doesn't provide the operations of addition and multiplication by scalars out-of-the-box (though you can whip up your own). Moreover, in general those operations wouldn't make sense for the elements which can be stored in a std::vector. E.g. neither multiplying bank account numbers by scalars, nor adding two bank account numbers make any sense. It would be good if you modelled them with types which don't allow those operations. Yet storing them in a std::vector makes perfect sense. But std::vector (or tuple) of bank account numbers is not a vector.


Worse, they have been calling the location of a pointer to a subroutine that handles an interrupt a "vector" - the word which was originally used to refer to the entire set of such locations. (Curiously, this usage is similar to how the word "sector" is used to refer to a physical record on a disk - simply because it usually comes last in the coordinate triple (cylinder, track, sector). This would be like calling a point in the 3-dimensional space a "Z".)


1. That doesn't make it not-a-vector. It just means there isn't a one-to-one correspondence between std::vector<T> and T^n. However, a particular instance x of std::vector<T> is still a T^x.size().

2. Operations are not intrinsic to a set, but to an algebra. Why would there need to be any correspondence between the operations of linear algebra and those of banking algebra in order to call an std::vector a vector?


2. I'm not sure I understand you here. A vector is defined by those operations. A set without those operations, even if it has the same elements, is not a set of vectors anymore. I don't understand your remark about sets here. The correspondence is necessary, because those operations on vectors are induced from the operations of the relevant field of scalars. The operation of adding vectors of bank numbers would be induced from the operation of adding bank numbers, if there was any such thing. The operation of multiplying vectors by scalars (bank numbers in this case) would need to be induced from the operation of multiplication of bank numbers, if there was any such thing. Anyway, I think you got hung up on the example, when the point is you can have std::vectors of types for which those operations don't make any sense. Now, imagining that perhaps there hypothetically might exist some definitions of those operations, just to convince oneself that std::vector is a vector is really stretching it, and I'm not even sure at this point if you're not trolling.

Anyway, as another commenter pointed out, Stepanov himself, who gave this container its name, said that it has nothing to do with vectors, and he wouldn't name it vector, if he could correct this mistake.


A vector is defined as I did above. It's a homogeneous tuple, i.e. the Cartesian product of n sets, where all sets are the same. Vector addition plus scalar multiplication are not part of the set. Usually they're part of the definition of a vector space. E.g. (R^2, +, ⋅). But you can construct a vector space out of a set of non-vectors (such as matrices), or out of operations other than vector addition and scalar multiplication, and you can use vectors for purposes other than constructing vector spaces, also without involving either of those operations. For example, to store sequential information.


Then you're using a definition of vector, not used in any mathematics course.

> you can construct a vector space out of a set of non-vectors (such as matrices)

A vector is by definition no more and no less than an element of a vector space. Vectors are defined by vector spaces, not the other way around.

If you have a vector space whose elements are matrices, then those matrices are vectors. And in a given basis they will be written as tuples of coefficients.

> out of operations other than vector addition and scalar multiplication

You don't "build vectors out of operations like vector addition and scalar multiplication", as in: you don't choose them. You choose the field and dimension, and those operations (vector addition and scalar multiplication) are a consequence.

> you can use vectors for purposes other than constructing vector spaces, also without involving either of those operations

Again, you don't construct vector spaces out of vectors - there are no vectors without vector spaces. And there are no vector spaces without those operations. But yes, you can use vectors from a given vector space in a greater capacity than just as vectors.

An example, which shows the futility of looking at vectors as just tuples: a real number is a vector in the vector space of real numbers over the field of rational numbers.


>Then you're using a definition of vector, not used in any mathematics course. [...] A vector is by definition no more and no less than an element of a vector space.

It's the definition I was given. If you accept that the word "vector" may be given a definition different from the one you give it, you'll need to concede that an std::vector may be a vector.

>you don't choose them. You choose the field and dimension, and those operations (vector addition and scalar multiplication) are a consequence.

You're using the phrase "vector addition and scalar multiplication" in a different sense than I meant. I was referring to component-wise addition between two vectors and to multiplication between a scalar and the individual components of a vector. You could choose different operations to construct a vector space, as long as they meet the requirements of vector spaces.

You used the phrase to refer to the constructing operations of a vector space. So yes, a vector space is indeed constructed out of the operations it is constructed out of. You could have been more charitable in your interpretation of my words, rather than assume I was saying something equivalent to "four-cornered triangle".


A vector is definitely not defined as you did above.

A vector is an element of a vector space, basically anything that you can add together and multiply with an element of a field. It has nothing to do with what you wrote above.

[] : https://en.wikipedia.org/wiki/Vector_(mathematics_and_physic...


Ironically, C++ does have a standard container that can be added together and multiplied by scalars... and it's called an array; std::valarray, to be specific.
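
For example (standard library only, nothing hypothetical): element-wise addition and scalar multiplication come built in.

    #include <valarray>
    #include <iostream>

    int main() {
        std::valarray<double> a{1.0, 2.0, 3.0};
        std::valarray<double> b{4.0, 5.0, 6.0};

        std::valarray<double> sum    = a + b;    // element-wise addition
        std::valarray<double> scaled = 2.0 * a;  // multiplication by a scalar

        std::cout << sum[0] << ' ' << scaled[2] << '\n';  // prints "5 6"
    }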


No it's not and even Stepanov, the person who gave it its name, has admitted it was a mistake:

Link to lecture by Stepanov: https://www.youtube.com/watch?v=etZgaSjzqlU

Furthermore in his book "From Mathematics to Generic Programming", Stepanov says that if he could change its name, he'd have named it "array".


Indeed, "vector" appears to be relatively recent terminology, it has been "array" since the dawn of programming. But it's too late now, and we already have the std::array anyway...


>An std::vector is a vector in the mathematical sense

No, because you can't prove that std::vector<T> obeys all vector space axioms[1] for all T, which is good because it's impossible: I can trivially define a T that breaks any number of axioms, and the C++ compiler will happily let me instantiate std::vector on it.

[1] https://www.math.ucla.edu/~tao/resource/general/121.1.00s/ve...


A vector can't change its size or contain things that aren't elements of a field. A std::vector can.


Yes, you can have a vector of, say, naturals. I'm pretty sure you can even create a vector field out of vectors of non-reals.


A "vector field" and a "field" are two completely different notions. (And yes, you can build a field from integers.)

https://en.wikipedia.org/wiki/Finite_field


My bad. I meant to say "vector space", not "vector field". I understand the difference, but I often make this mistake because the set of a vector space is almost always the set of an algebraic field (i.e. the reals).


Really? What is the additive inverse of (1, 2, 3) ∈ ℕ³ ?


It depends on how you define addition.


I guess you could make it a vector space through a bijection with the rational numbers, but something tells me C++ programmers don't have that in mind when writing std::vector<unsigned>. And it doesn't solve the problem that C++ “vectors” are resizable.


See my comment above on the subject.


(-1, -2, -3), is it not?


That's not a vector of natural numbers.


Ah, missed the N... But more interestingly, the non-negative integers 0...p-1 modulo a prime number p form a field (where -1, -2 etc. appear in a natural way), and so their tuples of some fixed length do form a vector space.


Java's Vector looks around awkwardly, hoping everyone forgets it existed... ;)


With the new versions of the JDK I think they will.

The Java Vector features will be out of incubator in the next year or so and unlike the existing Vector, it's actually useful. For context the new vector features are for SIMD support on the JVM.


I'm honestly surprised java.util.Vector still exists. It was "deprecated" in 1.2 or something like that? I guess there's no pressure to actually remove it, though.

I only know about it from dealing with J2ME crap where ArrayList wasn't available.


Java doesn't remove things from the standard library. AWT is still around, too.


Actually, I've never known it existed. You're the one who reminded me now. :)


And yet it is the name used in C++ code for a variable length container of contiguously stored data.


This is intended to be a different language, is it not?


Not wholly different; it appears to be intended to interop with, and progressively replace C++ in existing projects.


Why is it an obvious footgun?


A developer familiar with C++ may believe it to be a fixed length container (perhaps with automatic bounds checking[0]) and treat it as such in memory/security/performance critical sections.

0: https://en.cppreference.com/w/cpp/container/array/at
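
Roughly the expectation, in standard C++ terms (standard containers only): std::array's size is part of its type and .at() is bounds-checked, while only std::vector can grow.

    #include <array>
    #include <vector>

    int main() {
        std::array<int, 3> fixed{1, 2, 3};   // size is part of the type; it can never grow
        std::vector<int>   growable{1, 2, 3};

        fixed.at(2) = 9;        // bounds-checked; throws std::out_of_range on a bad index
        growable.push_back(4);  // only the vector can grow
        // fixed.push_back(4);  // would not compile: std::array has no push_back
    }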


And if they named it “vector”, a developer familiar with mathematics might believe it to be a vector.


The language is designed to replace, by progression, C++ for C++ developers.

Mathematicians are probably better suited to using R, Julia, Octave, or Wolfram.


Which is not a footgun.


It actually can be: mathematical vectors guarantee, for example, that x+y == y+x for all x, y, but no such guarantee can ever be made about a user-defined operator+; indeed, you can't even guarantee that it is defined to begin with, let alone in the manner prescribed by linear algebra.


Why use Rust syntax (fn, x:Type, ...)? Syntax is one thing that is not so well-designed in Rust (in my opinion). Also, with the stated goals, it seems a bit unnecessary to overhaul C++ syntax, but then I found no explanation why syntax was changed. So what's wrong with C++ syntax if your goal is a successor of C++?

This now looks to me like a Rust-- instead of a C++++, which is a picture they might not want to give rise to. Because then I'd rather use Rust instead, which then feels like the real thing(tm).

[EDIT: If you think about downvoting, maybe answer instead as I am genuinely interested in this syntax question. I am not trying to be negative, it was just an observation and a question.]


It's not just Rust syntax. `name: Type` is the syntax used in TypeScript and Python type annotations (also Ocaml, which is probably where Rust got it from). Golang drops the colon, but still keeps the name first.

As for what's wrong with `Type name(constructor, args)`? A lot of tooling wants to be able to parse "mostly-valid C++", like IDEs and compiler diagnostics. Sure, once clang's type inference is finished, the lexer hack and most vexing parse aren't problems, but when the program isn't complete, parsing isolated fragments is impossible, and that limits the amount of useful tooling the language can have.


The syntax `name: Type` is also friendlier to type inference as you generally have a token indicating a declaration. If you have `var x: Type = …` then you can just omit the type and let inference do its job.

Even better, when you start having more complex patterns on the left-hand side of `=`, you can type annotate them as you want. Hypothetical syntax would be:

    var (x: f32, y, [z1,z2,z3]) = SomeExpression(); 
That's harder to do when you have a type declaration on the left imho.


On the other hand, `name: Type` doesn't allow an IDE to suggest a name based on the type, because you type the name first. Also, setting values looks pretty confusing. I claim writing the type before the name is much more readable. Compare

    five: integer = 5
or

    integer five = 5


> I claim writing the type before the name is much more readable. Compare

I claim writing the name first is much more readable. Compare

    integer five = 5
or

    five: integer = 5


It would make sense if Carbon started allowing omitting ": Type", which they currently don't: https://github.com/carbon-language/carbon-lang/blob/trunk/do...


And to add, the `name: Type` syntax is not just a modern fad, don't forget the OG language that did this: Pascal! (https://en.wikipedia.org/wiki/Pascal_(programming_language))


OK, I see. Yes, C and C++ syntax is definitely ambiguous without semantic analysis. Maybe this could also be explained somewhere.


There is a lot of Rust in the Carbon syntax.

`fn`, `name: Type`, `i32`, `->` for return type. `impl` as a keyword. `Self` as a keyword.

Nothing unique to Rust, but it's interesting to see.


C and C++ are infamous for being harder to parse than is reasonable. C has lots of instances of context-dependent syntax where code changes meaning depending on what a name is.

You can parse Go without a symbol table, but not C, because stuff like

  item *a;
changes completely in meaning depending on the existence of a previous typedef for `item`.

C++ in order to not break compatibility with C has elevated this to insane extremes - there's so much ambiguous syntax in the language.

  x = b.get<item>(r);
This expression can either be a function invocation, if `item` is a type and a template method named `get` exists, or else it's a series of comparisons.

  item f(r, f);
is also ambiguous: if r and f are types, that's a function declaration, otherwise that's a variable declaration. There's no way to know which unless you have a symbol table, which makes separating syntactic and semantic analysis impossible at best.

The whole C++ language is full of similar ambiguities, and of attempts to fix warts that backfired spectacularly. That's also the reason why writing a C++ frontend is a decade-long endeavor that takes lots of people and resources. In comparison, a full-time team of 3 people can probably write a C parser in a few weeks, and C is a big mess too, albeit a smaller one.


A blog post by Roman Elizarov (Kotlin designer) on the observation of types following the variable name [1]. It mostly states that having consistent length prefixes (fn, val/var) makes the code more readable than arbitrarily long type identifiers (in the eye of the beholder). Not scientific, but interesting.

[1] https://elizarov.medium.com/types-are-moving-to-the-right-22...


Fixed prefixes also make a codebase significantly more grep-able. Want to find the definition of the function named `foobar`? Search `fn foobar` and that will always match, no regex required.


Only if you use a code formatter (which you should), because otherwise there will be some 'fn  foobar' (with two spaces or more) just to annoy you.


While I think rust has some serious issues with syntax (and I love rust), the things you've pointed out here are not rustisms, but programming language theory-isms.

x:Type has been used in MLs (SML, OCaml) and recently TS. Haskell uses x::Type. This usage dates back at least to Church's simply typed lambda calculus (1940).

fn and fun are used in SML and probably elsewhere. fun is used in OCaml, as well.


From the "Why build Carbon?" section:

> The best way to address these problems is to avoid inheriting the legacy of C or C++ directly, and instead start with solid language foundations like a modern generics system, modular code organization, and consistent, simple syntax.

That last part seems to imply that the authors don't consider C++ syntax to be a good foundation for a modern successor language, so they chose to change it. As to why change it in a Rust-like direction, I'd imagine it's both because that's what's fashionable at the moment and possibly to attract people who are already familiar with that style.


They could have gone the C# route, but didn't.


Go authors put out a good post explaining the problem with C syntax and contrasting it with Pascal/Go/Rust syntax: <https://go.dev/blog/declaration-syntax>


Indeed, syntax only a mother could love. (Swift, I am looking at you, too.)


C++++ is C# (notice how the four pluses can be arranged to form a #)

C#++ would be the next one, but I'd pull what Microsoft did to Windows 9 and skip right to C##.


Knowing my music theory though, C## is just D.


not sure on this one, on my keyboard there is a halftone to get D from C so I believe it's more C# = D ?


I can't handle [] syntax for generics. It makes code unreadable as far as I'm concerned because it makes it twice as difficult to know if I'm dealing with an array indexer or a generic.

The rest I can excuse but [], I cannot. Same reason I won't touch Nim.


I can't handle <> syntax for generics. It's impossible to auto-pair and looks ugly.

The real solution is [] for generics and something else for indexing.


Using function call syntax for array indexing always made sense to me. Functions are mappings from inputs to outputs, and arrays are mappings from indexes to values.
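
To illustrate in C++ (Grid is a made-up type): giving a container an operator() makes a lookup read exactly like applying a function from indexes to values.

    #include <array>
    #include <cstddef>

    struct Grid {
        std::array<int, 9> cells{};
        // operator() plays the role of the indexer: (row, col) -> value slot
        int& operator()(std::size_t row, std::size_t col) {
            return cells[row * 3 + col];
        }
    };

    int main() {
        Grid g;
        g(1, 2) = 7;           // "apply" the grid to an index pair
        return g(1, 2) - 7;    // 0
    }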


I agree, but explain that to programmers who think parametric polymorphism is too complex.


Given that one of the languages that historically used the same syntax for array access and function invocation since day one is BASIC, I would dare say that it's not a complicated concept.


When looking for elegant solutions, look at D; it's full of pearls: it uses !() for generics.


Looking at this [0] news from about a month ago, the Chrome team showed that they are trying to bring Rust to the table. However, their wording was:

> The Chrome security team is working to make a cross-platform memory safe language available to Chromium developers. This document describes how to use that language in Chromium. The language, at least for now, is Rust.

Could this mean that the "at least for now" part might hint to the language discussed in this thread? Memory safety seems to be a core goal of Carbon [1]:

> Safer fundamentals, and an incremental path towards a memory-safe subset

This approach could make sense, looking at their large C++ code base in Chrome.

[0]: https://news.ycombinator.com/item?id=31830020 [1]: https://github.com/carbon-language/carbon-lang


Sure, the Chrome security team has a problem, Rust has a solution, and if somebody comes along with a better solution, why wouldn't you take it?

But, right now Carbon doesn't have a better solution, it only has an ambition to build an incremental path toward being able to be a solution.

Importantly it has an ambition but it has no proposal for how to get there. There are other safe languages, but AFAIK none of them launched with "Eh, we'll do safety later, I'm sure we don't need to start with it". The ones I'm thinking of all began with their safety principles and then they added cool features which were inspired by those principles.

I reckon that even if that's not directly causal, it means the culture associated with these "We'll do safety later" languages doesn't prioritise safety highly enough. So ambition or not, they aren't going to put the hard work in to make it actually happen.


I work on the Chrome team. Safety is not the only goal for the codebase. There's also things like readability, maintainability, performance, and correctness. Finally, there's the need to chart a course towards how you get there.

Rust provides many of these, but not much of an incremental path there. Carbon aims to provide many of these, but not as much safety as Rust (likely, even in the future). The upshot: there are tradeoffs, and we'll need to watch our options and make continual decisions as to the best courses of action. Such courses may differ by area of the project; it's possible we might choose to write some hardened services in Rust communicating via Mojo with mixed C++/Carbon code.


It's a web browser. If safety is not the one overarching concern when developing a web browser, of all things, the Chrome dev team is clearly dropping the ball.

> Rust provides many of these, but not much of an incremental path there

The incremental path is provided since you can refactor stuff into Rust at a scale as tiny as individual functions. Even an "unidiomatic", C/C++-like interface to "unidiomatic" unsafe Rust code is better than the status quo where no safe subset of the language can possibly exist.

The Rust borrow checking approach does a very good job of establishing memory safety in a way that respects compositionality and module boundaries; most proposed alternatives do not engage with this obvious concern at all. Pre-"Modern" C/C++ idioms are inherently unviable because proving them safe is a global, program-wide concern. The Core C++ Guidelines developers are quite aware of this, which is why Guidelines-compliant code is quite rusty already.


You cannot refactor C++ to Rust at the individual function level without good Rust<->C++ interop. The Carbon docs go into this at a sufficient level of detail; TLDR, interop paths exist but are insufficient and are actively being researched (and Chrome devs are heavily involved in said research).

And as to your initial comment, of course safety is important, but if you don't understand why things like maintainability and performance are _also_ critically important, then your opinion is not sufficiently well-informed.


From the README:

> Interoperate with your existing C++ code, from inheritance to templates

Somewhere in the docs (can't remember where) is a link to a Rust/C++ interop project sponsored by Google, Crubit:

https://github.com/google/crubit/blob/main/docs/design.md

In case it's not clear, Carbon is also at the moment a Google project.

There's not much in the Crubit README (although there is a warning not to use it), but the docs directory has some interesting stuff:

> The primary goal of C++/Rust interop tooling is to enable Rust to be used side-by-side with C++ in large existing codebases.

https://github.com/google/crubit/blob/main/docs/design.md

It's not entirely clear whether this interop is to work as envisioned for Carbon (source level?) or some other approach.


Why "Carbon?" I just want to known if I a missing something subtle... or if it is just C, the symbol for carbon.

Developers need better and unique names for their products. I realize no one will care because no one is using it anymore, but Carbon was already taken by Apple for their Objective C API.[1] That both are pretty much in the same space with the same name is sure not to cause any confusion.

[1] https://en.wikipedia.org/wiki/Carbon_(API)


I'm not following you at all. Carbon was an API for Mac OS X, not a programming language. It was a C API, not an Objective-C API. (Maybe you are mistaking it for Cocoa, which required Objective-C on 64-bit platforms.) What relationship does this have to the new Carbon programming language?

Google did clobber another programming language with Go.[1] It was a truly a-hole move.

[1] https://en.wikipedia.org/wiki/Go_(programming_language)#Nami...


> I'm not following you at all.

Not even a tiny little bit? Really?

First of all, apologies for my boneheaded mistake, but let me try to see if I can show you the problem. If everything, everything there is, was named "Tom," you can see how it might get confusing determining just what is being talked about. That is hyperbole, but it should illuminate why it is ambiguous to call two things by the same name, but it is even more so when these two things with the same name operate in the same space.

But wait, you say, they are entirely different and unrelated! You think no one will be confused, because one Carbon is a programming language possibly based on C++ and the other is a C programming language API. Totally different, huh?

In fact, they're in the same space, programming. One Carbon is a language, the other an application programming interface for another programming language. They're also more or less dealing with the same programming language. I realize C is not C++, but without C there would be no C++, so they are very strongly related, along with ObjC. Fundamentally, these languages are all C, the newer ones using the same syntax but with advanced features not found in C. Still C, if we're being handwavy, and we definitely are.

So now, whenever someone mentions Carbon somewhere, like "use Carbon!", that will be ambiguous due to this developer's unfortunate lack of nomenclature creativity.

Does that help at all to reveal why a new programming language based on C++ should probably choose a unique name rather than recycle a name that is still in use to refer to a C API? Since Apple already chose Carbon for their C API, and since it is so well established, even if it is no longer used, and since it is just not that old, and since when it was current it was everywhere in the Apple world, it takes priority. So the arbitrary name this developer chose will inevitably cause confusion, because another Carbon already exists in the programming space.

Why not call it "Centigrade"? Or "Lightspeed"? Calling a C++-based language "Carbon" because it is based on C++ is not remotely as clever as Apple calling their C API Carbon, because C is the symbol for carbon, not C++. And "C++" itself is a great name for a programming language, because it will never be confused with anything else; AFAIK nothing other than C++ is called C++.

Let me know if you need more explanation why Carbon is an unfortunate name for anything new in the programming world. If it was, idk, the name of a publishing company or something, it wouldn't matter. It only matters because these things are in the same space.


+1 for this. It is a 'cool' name but this choice has problems.

As a practical illustration, it'll cause confusion when searching for 'carbon programming' or 'carbon apis' on $SEARCH_ENGINE. It isn't current but I would wager that there is still a lot of legacy Carbon code in the Macspace. This won't help the poor devs tasked with maintaining it. Unless they rewrite it in Carbon. See?

We in the trade also know how important it is to avoid giving things confusing names, and how frustrating it is when you find some garbled or ambiguous variable name while debugging or enhancing code. The same principle applies to naming your language too.


The Carbon API has been obsolete for years now; there's probably no danger of anyone getting confused when they search on Stackoverflow or Google.


I don't see the name explained anywhere in the excellent FAQ and other documents, but I wonder if it is a play on Rust/Oxidize/Oxidization -> Carbon/Carbonize/Carbonization.

The only mention of those terms seems to be https://github.com/carbon-language/carbon-lang/issues/505, which seems like weak evidence against the naming conjecture.


This is interesting. I've been relearning C++ after not using it for 25 years. It is hard, I was a Java dev for years, now Python with some Golang. Carbon looks much easier to take up.


There was apparently a presentation by Chandler today on Carbon: https://twitter.com/code_report/status/1549383435642445824


Why did they go to such lengths to obscure the fact that it's a Google project? I heard about it back when I was a Googler so I knew immediately what it was, but otherwise it was very hard to find any indication that this is a Google led project.


Must be avoiding the stigma of the broken promise not to be evil.


The thing is, they have an excellent track record of running open source programming languages for over a decade. It's often claimed that Go succeeded only because it was sexy because it was Google. Why hide it here?


Being a benefactor doesn't trump being an abuser at the same time.


Am I the only one that doesn't like this direction for the syntax? This was the main reason why I never got into Rust, it was just too different without any obvious reason. It's strange that I actually enjoy working with type annotated modern Python though.

I'll now go away for a few years until the dust settles.


The obvious reason is that C++ syntax is difficult to parse, for both machines and humans.


What exactly you don’t like about it?


Just a lot of small things stand out:

- function and variable types are harder to visually parse for me. The keyword to declare a variable and the type put the variable name in the middle, for instance.

- the "fn" keyword is terrible, just to save on characters. Honestly, any keyword should have vowels and is pronounceable. Python has "def", I would have been fine with "func" as well

- type annotations for templatization make it hard to read where the real arguments begin (for me). It puts info about the type in a different place than the variable itself.

- "Mutable" feels like it should be all lowercase, to stand out.

In general, I know there are areas to improve on C++ syntax, especially on modules, but some choices seem different just for the sake of difference.


It's pronounced "fun", they just didn't want to make it too explicit because coding isn't always fun.


I always thought it was pronounced "ef'in". Like when you have to censor yourself in front of coworkers because of this ef'in code.


Then why not be consistent with "let" and "var", and actually use the vowel?

I also am sometimes not fun at parties.


Can C++ be machine-translated to Carbon? That would be useful.

Translating C++ to Rust is too hard - the underlying data models are too different. What would be really useful would be something that intelligently translates pointer-based C++ code into a slice-based language. Every place there's an unsized array, something has to figure out how big it is and pass that info around. In the original program, that information had to be present in some form. The trick is finding it. That's probably not out of reach for a static analyzer today. Especially if machine learning is used to help find the usual idioms of C/C++. First find, then check.


> Interoperate with your existing C++ code, from inheritance to templates

Very important. A C++ alternative and successor needs to be compatible with C++.

The rest of these so-called C++ alternatives amount to rewriting everything in their own language, causing chaos with incompatibilities and their own language features, and then realising that such rewrites weren't a good idea after all, having been sold vacuous promises and language-feature snake oil with only pretty syntax sugar to show for it.

Unfortunately, the hype squads will just attempt to drown out other alternatives like this one; even if it works with the existing C++ ecosystem.


It's a shame a stable ABI is declared as one of the non-goals, C++ is painful enough to integrate with other languages.


C++'s current unwillingness to specify a stable ABI (or even to state that ABI stability is a goal), while standard library implementations nonetheless avoid breaking the ABI, is the worst of both worlds. And the refusal to break the ABI adds overhead to things like std::unique_ptr, which isn't great.

This paper goes into the details https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p18...


Also, the unfortunate associative containers and std::regex.


A stable ABI can effectively kill a language. https://cor3ntin.github.io/posts/abi/


(At Google, it’s definitely useful to others.)


Apparently this is a google or google-adjacent project. Google was one of the most vocal committee members in favour of breaking ABI stability, so it is not surprising.


Part of why google is unhappy with the committee is google’s desire to break C++ ABIs.


ABI stability is an unstated goal of C++ and holds it back by preventing meaningful change or the fixing of issues. Declaring it a non-goal is good.


Its github repo has a very interesting doc on when it's the right time for Carbon language to go public. https://github.com/carbon-language/carbon-lang/commit/b8750e...

> Broader field experience required

> Sustained interest to use Carbon from multiple organizations and individuals

> Prototype implementation

> Demonstration of potential

> Learning material

> Prepare for new contributions and feedback

> Launch event

> Relationship to C++

> Perception of ownership by a single organization

Is this post considered as a step towards going public?


It was made public at CppNorth, a C++ conference, earlier today. The page you mention is a bit out of date though, you can find the update to our plans here: https://github.com/carbon-language/carbon-lang/commit/4aa462...


A reasonable test of interop would be converting Boost to Carbon. Then you'd hit an enormous number of edge cases and really be able to understand Carbon's shortcomings, no?


Carbon Team, thanks for sharing, I will likely use this if I need to bridge with CXX libraries.

Also, thanks for adding sum types and pattern matching. Is there any way you could get the Dart team on board? They seem to be averse to them, and it's a big reason why I don't use the Dart/Flutter ecosystem.


> Also thanks for adding sum types, and pattern matching, is there anyway you could get the dart team on board?

I'm driving the design of pattern matching for Dart. The entire language team is on board with them and we have a design that's pretty far along. It's really hard to integrate pattern matching into a language that wasn't originally designed for it, so it's taken a while to settle on a syntax that makes sense, but I think we're getting close.

If you'd like to be involved, there is a category on the issue tracker for design discussions related to patterns and records:

https://github.com/dart-lang/language/issues?q=is%3Aissue+is...


Au contraire! We're actively investigating them here on the Dart team. We're finalizing the spec here: https://github.com/dart-lang/language/blob/98335746fb7f2dedf...


Seems neat, but it looks like the major thing is the syntax. Of course C++ syntax is insane and should be updated, but I can't shake the feeling that C++ developers secretly love it. Scanning C++ code, for a seasoned C++ developer, is like the bushmen of the Kalahari scanning the African savanna.


Because we're scanning for predators?


Does it help in any way with memory management, or does it keep that burden on the developer's back?


This is the proper way to replace C++, being able to fully consume it

Just like Kotlin did to Java

The only viable alternative to C++ imo


I've seen this fn/var argument a few times, but is it really that much harder to write a parser that can intuit the difference between a function/variable/statement without explicitly putting function/var/let/etc all over the place?

I like the verbosity of a language like pascal, but I just find it jarring in a C replacement these days. (and rust's structure definitions/etc I just find needlessly verbose for the way I use C structures).

C is nice because it's fairly easy to mentally parse (C++ less so) vs some of the alternatives, while still being extremely concise.


The C way makes it hard to find the name of the function (or even tell that it's a function declaration) when the return type is long.

    std::vector<std::unordered_map<std::string, std::pair<std::string, std::string>>> foo(std::vector<std::vector<int>> x, std::unordered_map<std::string, std::string> y);


I said C++ was more difficult, but how about:

  using namespace std;
  typedef vector<unordered_map<string, pair<string, string>>> foo_bar_xref;

  foo_bar_xref foo( vector<vector<int>> x, unordered_map<string, string> y);
Not sprinkling std:: everywhere helps with readability, but so does defining a couple of types that are used frequently, especially if they are particularly verbose. And of course the parser doesn't really have a problem with this, so your editor should be able to find it with its 'go to declaration' function. I don't feel pity for people who choose to use editors/IDEs that make this difficult.


> not sprinkling std:: everywhere helps with readability

I agree, but most C++ programmers don't :/

> And of course the parser doesn't really have a problem with this, so your editor should be able to find it with its 'goto declaration' function.

C++ parsing can require template instantiation, which is Turing-complete, so due to the halting problem, finding the declaration can take a long time, potentially unbounded in pathological cases.
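
A contrived sketch of the pathological case: resolving some names forces template instantiation, and instantiation can recurse without bound, so real compilers only stop because they enforce an implementation-defined depth limit.

    template <int N>
    struct Diverge {
        static const int value = Diverge<N + 1>::value;  // each instantiation triggers the next
    };

    // Uncommenting this line forces the unbounded recursion; the compiler
    // eventually gives up with a "template instantiation depth exceeded" error.
    // constexpr int boom = Diverge<0>::value;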


The problem historically is that "using namespace" was undesirable in headers, since it would apply to anything that included them. And with the tendency for modern idiomatic C++ to be heavily templated, many function bodies have to be in the headers.
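
A minimal sketch of why (the file split is hypothetical; both halves are shown in one snippet):

    #include <string>

    // --- imagine this part lives in widget.h ---
    using namespace std;                           // escapes into every file that includes widget.h
    inline string widget_label() { return "ok"; }

    // --- and this part is a file that includes widget.h ---
    // It never asked for all of std, but every unqualified name now resolves
    // against it, which can silently change overload resolution or clash with
    // local names; that's why headers spell out std:: instead.
    inline string another_label() { return widget_label() + "!"; }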


>C++ syntax is Turing-complete, so due to the halting problem, finding the declaration can take a long time, potentially unlimited in pathological cases.

Absolutely not unlimited time. Once it's parsed and semantically analyzed you can find the declaration in no time. Exactly like the compiler does.

edit: to clarify, I was not taking into account unbounded recursion on template instantiation, which should be limited in any case by the compiler.


You can use using = instead of typedef

  using namespace std;
  using foo_bar_xref = vector<unordered_map<string, pair<string, string>>>;

  foo_bar_xref foo( vector<vector<int>> x, unordered_map<string, string> y);


> (and rust's structure definitions/etc I just find needlessly verbose for the way I use C structures)

Do you have an example? What's verbose about Rust struct syntax?


Tried to understand more with some basic examples. Wrote a getting started guide for fundamental syntax. Feel free to play around.

https://tipseason.com/carbon-language-tutorial-syntax/


If you enumerate all top level goals of the language, they all are satisfied by just staying with C++:

. Performance matching C++, an essential property for our developers.

. Seamless, bidirectional interoperability with C++, such that a library anywhere in an existing C++ stack can adopt Carbon without porting the rest.

. A gentle learning curve with reasonable familiarity for C++ developers.

. Comparable expressivity and support for existing software's design and architecture.

. Scalable migration, with some level of source-to-source translation for idiomatic C++ code.


Migrating C++ code can be a major pain in the ass "at scale", and designing C++ template code that works intuitively but isn't just simple type replacement is an even bigger pain in the ass. Hence a language that doesn't include things like SFINAE.


Awesome! I was looking for an alternative to C++ that cleans it up!


Some quick notes about interesting features of the design:

* Source file encoding is required to be UTF-8. Strings are UTF-8. No apparent provision for binary strings, but I haven't delved into the string API.

* Retains the C/C++ definition where signed integers cannot overflow (it's undefined behavior) while unsigned ones can (they wrap). That's o_O-worthy; the two should be aligned, and if they can't overflow, provide some form of wrapping integers as well.

* Yay, tuples.

* Struct type syntax is ... weird? {.name: String, .count: i32}

* Expressions. Partial order for precedence (i.e., a | b << c is ill-formed instead of being parsed as (a | b) << c or a | (b << c)). The Rust-style cast syntax I think I prefer, but ^x for bitwise not is o_O, and if a then b else c feels odd for a generally C-syntax language to use. Similarly, using 'and' for logical and instead of &&.

* var and let for declaring variables; one is constant, the other not--that's somewhat jarring. It seems that the : <type> is mandatory, and inferred type is : auto instead of letting it be omitted? Feels unnecessarily verbose to me.

* No goto, nor labeled break and continue. Huh. Also, 'for (var name: String in strings)' is again feeling unnecessarily verbose... (Can break break out of an if statement, or does it have to be a loop?)

* You declare "returned var c: Circle" instead of relying on named return value optimization. Again with the verbosity, although I haven't yet reached copy/move constructor stuff to understand how much automagic happens.

* The [me: Self] syntax is weird. I'd like something closer to the C++ deducing-this syntax or Rust's &self/&mut self, where the type of the this parameter is specified via the first argument rather than in what feels like a somewhat out-of-band position.

* Mixin (aka multiple inheritance) is unspecified at this point.

* The keyword for enums is "choice"? Really?

* Name lookup retains the C/C++ rules of need-to-declare-before-use. Again, is this really necessary? It's fiddly...

* The [me: Self] syntax appears to be a specific instance of the generalized syntax for generics, but only for method parameters, because generic types use () instead. Again, why differentiate from the C-family standard practice of <> for generics? Also, the : versus :! in generics strikes me as overdrawing the weirdness budget one too many times. Props for using something closer to a constexpr if than C++ SFINAE; SFINAE is not a design model I would carry forward in any future languages.

* Stable ABI appears to purely be defined in C ABI terms. Sigh, can we get people on these language committees together to start thinking about post-C standardized ABIs?

* Async, coroutine, lambda stories are unclear.

* Error handling also unclear. (That's kinda important!)

* Not clear from the main design document is how the distinction--if any--between trivial/nontrivial copy/move types work. There appears to be an explicit move operator (with obvious syntax ~x), so support for nontrivial move or immovable is better than Rust already, but the avoidance of a NRVO setup still makes me wonder what the actual story is here.


> * Retains the C/C++ definition of overflow of signed integers cannot overflow, unsigned can. That's o_O-worthy; the two should be aligned, and if they can't overflow, provide some form of wrapping integers as well.

Agreed, they should provide three kinds of integers: signed as usual; unsigned like in C++ but for compatibility only, usage discouraged (unless you're programming a clock); and "natural integers" which trap on over/underflow in debug builds. And going from one kind to another should always require an explicit cast (there is a proposal to allow some implicit conversions, and it allows implicit conversion from unsigned to a bigger signed type :-( :-( ).
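
A minimal sketch of such a "natural integer" in today's C++ (CheckedI32 is a made-up type; it relies on the GCC/Clang __builtin_add_overflow intrinsic): the check traps via assert in debug builds and compiles away when NDEBUG is defined.

    #include <cassert>
    #include <cstdint>

    struct CheckedI32 {
        std::int32_t v;

        CheckedI32 operator+(CheckedI32 other) const {
            std::int32_t out;
            // Returns true if the mathematically correct result didn't fit.
            bool overflowed = __builtin_add_overflow(v, other.v, &out);
            assert(!overflowed && "signed integer overflow");
            (void)overflowed;  // keep release (NDEBUG) builds warning-free
            return CheckedI32{out};
        }
    };

    int main() {
        CheckedI32 a{2000000000}, b{1};
        CheckedI32 c = a + b;      // fine: 2000000001 fits in an i32
        (void)c;
        // CheckedI32 d = a + a;   // would trip the assert in a debug build
        return 0;
    }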


Rust is good enough for HFT, Web rendering, and the Linux kernel but go ahead and add Carbon to the massive list of been-there-done-that.

Pro tip: Don't try to migrate an entire catalog of functions all at once to Rust (or pick your favorite C++ alternative). Pluck off individual pieces and use FFI to bridge the gap. Remember, work incrementally.


Based on my quick read, the “Why build Carbon?” section calls out just one problem with C++, which is the developer experience. Agreed.

However Carbon seems to do a whole lot more than DevEx. It’s practically a whole new language that can interop with C++. I don’t see the point in that. Would be great if someone pointed out why I should change my mind.


They have a separate document about difficulties improving C++: https://github.com/carbon-language/carbon-lang/blob/trunk/do...


I had been thinking about something like this for a long time. There is just too much C++ code out there, but C++ (despite many efforts by the standards committee) is still too verbose and hard to make sense of in many cases. So I thought "why not design a new language that is compatible with C++?"


Creating a "memory safe C++" is not too hard if you think about it.

Here is my attempt, since 2011:

http://sappeur.ddnss.de/

Works nicely, unlike the Java Behemoth.


I know lots of languages are going with exactly stating the size of your numbers, like "i64". In C/C++ you would say "int". Of course, sometimes you want to declare an exact size, but for a regular integer it seems simpler to just say "int".


A detail that strikes me as odd, and an explicit deviation from Rust, is to indent with only 2 spaces.

Dart does the same, but its justification is the (excessive) amount of indenting going on. It's often a struggle to match indents visually.


Why not use a text editor that renders indents as whatever you want them to be?


If you want to me to buy it, show me an automatically translated C++ snippet.


I wish they spoke more about their ideas regarding the memory model. For me, that's usually the first thing I want to know about a new language.

(I initially assumed they had some syntactic sugar for smart pointers.)


> Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should.

Enough said! I agree.



Not for gamedev or protocol stacks since no pointer arithmetic …


I'm not convinced that pointer arithmetic is necessary for gamedev: Odin (which has swizzle built-in!) has pointers but no pointer arithmetic.


Why do so many new and improved languages still lack tail calls?


It's moderately annoying to implement because it messes with the calling convention in architecture-dependent ways. So it isn't an IR transform, it's N lowerings for N architectures.

Clang and LLVM understand them, and you can require them from the front end, but the cost is that some backends will hard-error on them as unimplemented.


Just to clarify, are you saying that a language’s calling convention is implemented differently per architecture? Or is it that the tail call implementation needs to be implemented in different ways per architecture and that would mess with the required calling convention?


Calling convention covers where values are placed in memory (or stack or registers) by the caller so that the callee can find them. There can be N of these as long as caller/callee pairs agree sufficiently. The instruction set you're compiling to influences the cost of different choices, e.g. how many and which registers to use.

Tail calls mean reusing memory (notably the stack) and arranging for there to be no work to do between the call and the return. E.g. if arguments are passed by allocating on the stack, you can't deallocate after the call, so you have to make the stack look just right before jumping.

If you've got multiple calling conventions on your architecture, they each need their own magic to make tail calls work, so you might have 'fastcall' work and 'stdcall' error. Iirc I implemented it for a normal calling convention on one arch and didn't bother for variadic calls.

I suppose one could have a dedicated convention for tail calls as well, I just haven't seen it done that way. Usually the callee doesn't know and can't tell whether it was called or tail-called.
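
As a concrete example of "requiring them from the front end": Clang (not standard C++) has a musttail attribute that demands a guaranteed tail call and rejects the program at compile time on backends or calling conventions that can't honor it. A minimal sketch:

    #include <cstdio>

    int countdown(int n) {
        if (n == 0) return 0;
        std::printf("%d\n", n);
        // Caller and callee share the same signature and calling convention,
        // so the call can reuse the current stack frame instead of growing it.
        [[clang::musttail]] return countdown(n - 1);
    }

    int main() { return countdown(3); }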


Their C++ code example doesn't compile; it has too many "}" marks in the circles definition call. Truly the mark of an experimental branch.


I read the title and immediately thought of the Carbon API that was used in Classic Macintosh and legacy C/C++ GUI apps on OS X.

Obviously not…


It's a minor thing, but I do find it odd that they use PascalCase for function names when the C++ standard library uses snake_case.


Don't create a programming language to kill another language. That never works. Create a language that fulfills a specific purpose.


Carbon is intended to kill C++ in just the same way as C++ is designed to kill C, Kotlin is designed to kill Java, and Dart is designed to kill Javascript.

Which is to say, "Not at all".


Well, to be fair, "successor" can indeed be understood as "replacement."


This is rather deep, if you think about it. My gut feeling, coming from experience, is that the future lies with DSLs. A good programmer would mold the general-purpose language he works with into a DSL fitting the problem at hand anyway (using whatever means available, e.g. macros or generics or both). Incidentally, the modern Lisps and C++ look the best in this regard.


Please at least do what Rust and Go can: generate code for multiple platforms easily, i.e. make the build tool cross-platform friendly.

One reason I use Go over C++ is networking: for C++, network sockets on Unix and Windows are totally different (Winsock vs. BSD sockets), while Go works well on both OSes.


The experimental successor to C++ is Circle.

I guess now it is clear where Google's clang contributions end up going instead.


This is a really great idea. I love Rust, but its lack of good C++ interop has kicked me more than once.


Well, `returned var` is totally unnecessary - why didn't they borrow the syntax from Go?!


https://github.com/carbon-language/carbon-lang/blob/trunk/pr...

> Disadvantages:

> - The syntax space of the return type is very crowded already, and this would add more complexity there rather than fitting in cleanly with existing declaration spaces within the body of the function.

> - Likely to be an implementation detail and valid to use on a function with a normal return type in its forward declaration. This isn't obvious when placed in the declaration position, and it might well be unnecessarily added to forward declarations just because of its position.

> - Removes the ability to both specify a specific return type and a pattern for binding parts of that object to names. Instead, the type must be inferred from the pattern. This ended up feeling like a deep and fundamental problem where we would lose important expressivity to disambiguate the return type from patterns. For example, when the return type is some form of sum type or variant, the pattern space is both especially attractive to use and fundamentally fails to express enough information to capture the type.


I'm still waiting on SPECS (1998) (https://users.monash.edu/~damian/papers/HTML/ModestProposal....)

But I don't think a re-sugaring of syntax is enough to make people switch.


This is the 3rd language I know about that Google created. I have no idea why they don't use Rust with their C++ code base, and I don't know why they made this instead of using Zig, which actually brings something new to the table.

Overall I don't see myself using this. 0/3 Google.


There is an entire document which explains why they can't use rust or zig.

The short version is that for their particular use case they need really close ties to C++.

"Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should. Unfortunately, the designs of these languages present significant barriers to adoption and migration from C++. These barriers range from changes in the idiomatic design of software to performance overhead."


I guess I'll rewrite my original comment here to clarify

I don't know why Google created this language. It offers nothing new, and I can't see why they'd use this over Rust when Rust can be used. I don't see why they'd use this over Zig, since Zig can certainly be used where this language is meant to be used. Zig actually brings something new.

Overall Google is 0/3 on creating languages that people want to use. Maybe Go is useful, but I haven't seen enough proof.


From the syntax alone it looks a bit more approachable than vanilla C++.


>"Carbon aims to fill an analogous role for C++:

JavaScript → TypeScript

Java → Kotlin

C++ → Carbon"


Wow, this one looks really interesting. Is it 100% interoperable?


All these C++ "successors" that do nothing but change the syntax sugar and find ways to operate with C++ are really tiring to read about.

I seem to be the only one on the planet that doesn't think the language needs to be replaced.


AFAIK this is the only one that is compatible with C++ at the source level. As much as I like C++, I think it has a lot of accidental complexity built in, and the syntax is too verbose at times. I also think that a lot of people (me included) do not want to switch to something like Rust immediately because of the existing C++ code and the friction of having to make it interoperable. So I believe this is actually a great experiment. We'll see whether it works or not, but I hope it will.


I'm as tired as you by these cosmetic successors. But for my part I think C++ has already been replaced, and to great benefits. I couldn't thank the people building Rust enough.


I still haven't seen a cross-platform production level GUI app written in Rust.

All the time, it is C++ these companies use for these apps, especially having millions of users and generating multi-millions or hundreds of millions of dollars.


"I still haven't seen a production level numerical weather prediction app written in Rust.

All the time, it is Fortran the government uses for these apps,..."

I vacillate as to whether the best response is "Who cares?" or "give it time". First, if C++ actually is better for GUI apps, then more power to C++ (do you have some evidence this is the case?). That doesn't mean there aren't other niches to fill for other languages, like Rust. Next, Rust is a relatively new language. It may end up that it's really great for GUI apps, but again it doesn't have to be. It can be great at other things.


GP mentioned C++ being replaced by Rust - I'd say that a replacement should be as capable as the thing it replaces, so if that's the claim, then Rust should be at the very least OK for GUI apps. If it's not, then that's also okay, but let's not call it a replacement then. :-)

I'm personally excited to see how all those languages will influence each other!


> GP mentioned C++ being replaced by Rust

Then your beef is with the GP?

I'm also pretty certain we shouldn't be that pedantic about the word "replacement". Perhaps it's fine for a thing to be a "replacement for the GP" (which I think he/she is pretty clear about) or a "replacement for many uses" or for "all new uses", without being a "complete replacement"?

What's getting so weird about the tenor of the current anti-Rust backlash is that 1) when the discussion turns to safety, it's "Hey, dude, don't harsh my buzz", but 2) when the discussion turns on what the person meant, the Rust community's well-founded enthusiasm is interpreted in the harshest, most uncharitable, light.


You’re comparing a framework for one language to another language.


I don't think I understand - where is any kind of framework mentioned? And even if frameworks were mentioned, I think it's a fair comparison to say "language A has battle-proven / easy-to-use / etc. framework to achieve X, but language B doesn't".


Since when is GUI part of language?


From the moment it can interact with the underlying operating system? C++ doesn't specify any GUI framework as a part of the standard, and yet people manage to write GUI apps. Rust should have the same capabilities of calling into the OS, but GUI apps seem to be slow to appear there - it may be because it's still early days, or because Rust isn't super pleasant to write GUI apps in.


Many cross platform GUI apps have been written in C, JS and Java too. The ability to write GUI apps does not really seem that unique to C++.


That’s not the claim being made.


Remove "cross-platform" and "GUI" and your post would still be correct.



The GUI is made with Electron.


Written in a mix of JavaScript and C++.


Perhaps also with a large portion of legacy code that they don't want to throw away and switch to something new?


We may need more time. IIRC, Firefox is incorporating some code written in Rust, and there's also Servo.


Mozilla laid off Servo's developers and a lot of the core Firefox team 2 years ago. Servo is dead.


But it is a cross-platform GUI app, which the post was about. Granted, it is (or was?) experimental (so not production-grade), but the PoC for GUI app in Rust should be there (assuming they didn't use something else just for the GUI, I'm not an expert on Servo).


> I think C++ has already been replaced

In which world has Rust replaced C++?


I feel the same way. Add to that the plethora of "C++ bad" memes that every software engineer is throwing around as soon as they get a whiff of anything remotely resembling C++ - I've made my peace with the fact that I'm in a tiny minority of people who think it's generally a great language that is worthy of further development and improvement instead of abandonment.


I agree.

Although I still think it should be possible to make some things obsolete in C++. The technical debt is real, and the language can hardly improve if old codebases are slowing down the evolution of the language.

I have already written several times that I would like a new C++-like language, but without the complexity of C++. D, Zig and Rust are fine, but they're not simple languages to use. I want the nice things of C++ (string, a few containers, a bit of syntax sugar, the most useful std stuff) with enough of the simplicity of Python or C.

I just use the simple parts of C++, and I only want those parts. I just want the KISS simplicity. Carbon is not that, neither is rust zig or D.


Please let them make a proper interpreter for dev efficiency!


Doesn't look much like C++ successor to me, at all. What's up with all those fns, and functions starting with uppercase?


why camel-case functions tho :(


Indeed, from Python to Rust and newer C++ codebases, you usually see CamelCaseClasses (or structs) and then snake_case for functions, methods, etc., which improves readability.

CamelCaseForEverything is such a waste. Maybe they use it for implicit public/private as in Go? And maybe to please existing Go or Java users?


That's PascalCase. camelCase is like this.


I call them UpperCamelCase, lowerCamelCase, Upper_Snake_Case, lower_snake_case.


Requiring Google's CLA to be signed to contribute seems pretty limiting...


Have you thought about rewriting it in rust? :3


We already have this with an open foundation: Rust

No thanks


On the roadmap they have "Broaden participation so no organization is >50%", so this has a good chance of not becoming Swiftoogle-Lang.

Also, under "Why not Rust?"

  If you want to use Rust, and it is technically and economically
  viable for your project, you should use Rust. In fact, if you can
  use Rust or any other established programming language, you should.
  Carbon is for organizations and projects that heavily depend on C++;
  for example, projects that have a lot of C++ code or use many
  third-party C++ libraries.


They have an entire FAQ subsection pointing out some practical reasons why one might choose Carbon instead of Rust.

https://github.com/carbon-language/carbon-lang/blob/trunk/do...


How straightforward is interop between C++ and rust?



I'm surprised C++ is still prominent. I thought Rust would have taken its place by now.


Interesting. It will obviously take off _because Google_, but since they're heavily influenced by the Rust syntax, why not just learn Rust instead?

They'll need to get it into Compiler Explorer so people can really look at codegen rather than porting small programs.


> but since they're heavily influenced by the Rust syntax, why not just learn Rust instead?

That's pretty extensively covered in the link, but here's a relevant snippet:

"Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should. Unfortunately, the designs of these languages present significant barriers to adoption and migration from C++. These barriers range from changes in the idiomatic design of software to performance overhead."


> since they're heavily influenced by the Rust syntax, why not just learn Rust instead

Interestingly, when looking at their code samples, the vibe I get is more "Go++". Using `var` for variable declarations, letter casing for visibility, explicit returns even at the end of functions, using the "package" keyword for namespacing, etc. I do see some superficial syntactic similarity to Rust, like using `fn` for functions and `->` to annotate return types, using `:` for type annotations for variables, and semicolons seeming to be required at the end of lines, but overall it doesn't really _feel_ that much like Rust to me, I think due to how imperative it seems. Given the use of `class` and `let/var` seeming to be const versus mutable bindings, I'm wondering if the Rust resemblance is actually just transitive through more of a resemblance to Swift, although I don't know Swift well enough to know if this is an accurate explanation.


> like using `fn` for functions and `->` to annotate return types

`->` syntax is included in the C++11 standard under the name "trailing return type", but its adoption seems to be very slow.

`auto f() -> int { return 42; }`
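
One place the trailing form genuinely helps (a small sketch, with made-up names): the return type can refer to the parameters, which the leading-return-type form can't do directly.

    #include <string>

    template <typename A, typename B>
    auto add(A a, B b) -> decltype(a + b) {   // return type depends on the parameters
        return a + b;
    }

    int main() {
        auto i = add(1, 2.5);                  // deduced as double
        auto s = add(std::string("a"), "b");   // deduced as std::string
        return static_cast<int>(i) + static_cast<int>(s.size()) - 5;  // 3 + 2 - 5 == 0
    }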


Interesting, I didn't know that!



tremendous!


As stated in their goals, they want Carbon to be semantically compatible with C++ so Carbon code can use (automatically rewrite) C++ libraries.


C++?? versions are the experimental successors to C++: every 3-4 years, a new language that the old C++ compilers cannot compile.


Backward compatibility works the other way: new compilers can still compile old code.


No. No. No.

This reminds me of Dart - another big company thinking they can invent a new language to tackle old language problems.

This is not the way to improve C++, because this path already exists: either as D, or, if compatibility and the OO model aren't an issue, Rust.

C++ has been improving slowly but surely - just remember that 90% (heck even more) of C++ issues stem from the hell that's compatibility with C and that's both a blessing and a curse.


Google is in the business of making software in exchange for money and they have internally determined that the way C++ is and the direction C++ is headed isn't good for them. They don't "care" about C++, they care about running a business. They have determined that C++ as it is isn't sufficient and the direction its going is not acceptable to meet long term goals, whether that be implementing new products or maintaining existing software. Google has wholly unique software organization problems that very few other companies probably have.

If you've seen their CppCon talks, you'll see that the changes they want are not what the committee (or even the general C++ community) focuses on (for example, I couldn't care less about ranges). For example, they want to make changes to unique_ptr because the inefficiencies in the ABI cause performance issues that can probably be tracked to additional cost per month. They couldn't make these changes without changing the ABI. At Google scale, small inefficiencies add up fast.

I have no doubt they looked at D and saw that it wasn't feasible (for whatever reason) and they already stated why Rust is not sufficient. They wouldn't make this decision arbitrarily and we are not privy to all the meetings that went into the decision making process.


Do you mind sharing a reference for the changes Google wanted to make to unique_ptr? I tried searching but couldn't come up with the right set of keywords to find anything relevant. TY!



Same big company actually - Dart and Carbon are both Google projects


I just do NOT feel like using a language that has explicitly stated that you should not start a new project with it. Very few legacy projects will consider using Carbon, I would assume (if you have worked on one, you probably know that "legacy" is usually more than just the code), and then there are so few projects left that I see no chance of this really taking off. And in that case it will probably die soon as well, like many (most) other Google projects.



