I'm heartened by the success of the Typescript model of improving things when there's a deficiency/problem.
The alternate approach (which is extremely popular, unfortunately) is to throw it all out and rebuild everything from scratch. I guess it's fun and exciting, which attracts developers, but it takes a long time to achieve any level of maturity, and such projects are hard to sustain (the people who are in it for the fun and excitement will move on before too long).
The only problem is that you run the risk of implementing a still-developing standard that might be vastly different from the finalized version. You then have code written, and programmers trained, the old "wrong" way.
This happened in TypeScript when they added support for an early version of decorators and now the TC39 version (which is still only Stage 3) is just different enough to cause issues.
They have been labeled experimental the entire time and are off by default. That specific TC39 proposal is also approaching a decade of foot-dragging and has been exceptionally slow to make progress. In TypeScript, if you aren't opting into decorators through a library that forces you to use them, they're entirely avoidable (and some would argue decorators make code worse, and this failure to launch is a good thing).
I think the TypeScript creators themselves learned a lesson with decorators and enums, which is why we haven't seen other JS language proposals get added until they're actually in the process of being adopted (e.g. matchers).
> That specific TC39 proposal is also approaching a decade of feet dragging and has been exceptionally slow to make progress.
So... like the pipeline operator? Pretty much given up on that being included now. They can't make up their minds over two competing syntaxes (and, FWIU, it has taken them years to decide).
1. Enums are one of the few TypeScript features that aren't type annotations that can simply be erased. Enums emit code and have no equivalent JS feature.
2. Const enums are unsupported by some bundlers/build tools, so people try to use them and get burned at build time.
3. The use cases covered by enums are often better served by union types.
None of the above is necessarily fatal for the feature. Certainly people used to enums in other languages still like them. But combine all 3 and the general recommendation these days ends up being: just don't use them.
Yeah, tools can add support for them, but they're fundamentally a whole-program optimization in a build chain that's 99.9% file-by-file.
Supporting const enums will, by necessity, greatly reduce the maximum performance a toolchain can achieve, because you have to evaluate the entire project to tell whether `Foo.BAR` should be left alone or replaced with a constant defined elsewhere. And in the worst case of an ambient declaration, "elsewhere" could be any file in the project.
You mean the feature that was gated behind an opt-in switch called `experimentalDecorators` and documented from the start with "Experimental support for decorators is a feature that is subject to change in a future release"?
This was only really added to appease Google and Angular 2.0.
Yeah Microsoft didn’t want AtScript to steal the little thunder TypeScript had at the time, but we’re still left with that one experimental feature all over Angular codebases.
The TS approach is really dumb for C++ and completely unnecessary. Compile times are long enough as it is without another meta compiler on top.
You already have a compiler. Just make it emit binary-compatible code for the new dialect. You have modules now so you don't have the problem of supporting mixed-dialect headers.
The TS approach isn't about having a meta compiler. cppfront is intended as a temporary stepping stone, just as cfront (the original C++ compiler) was a temporary frontend for C++. Eventually the goal is to add support for the new syntax to GCC, Clang, and MSVC.
Not just that, but a big part of Herb's feature set is things that can work as proofs of concept and proofs of usefulness, to be presented to the C++ committee as actual features that could be added to C++. Many of the features of cppfront have already been proposed to the committee.
This was dead when I saw it, which is weird because it's factually true. So I vouched for it.
Herb Sutter proposes a bunch of stuff to WG21 (the C++ committee) and most of it goes nowhere. We're not talking about two proposals here; maybe a dozen is closer. Most of that stuff from over the years is in Cpp2, because it's Herb's language so they can't stop him.
The one thing Herb proposed in that time which got into C++ was the spaceship operator <=>, which is basically like Rust's PartialOrd trait: you write one operator for your type, and for any pair of values (this & that) you decide whether this is Greater, Equal or Less than that -- or none of the above. As a result the compiler can implement all the obvious comparison operators like <= or == or > using that one piece of code you wrote, and it's easy to stay consistent. This is very nice, and it's in C++ today, but of course it's also in Cpp2.
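A minimal C++20 sketch of the idea (the type and values are made up):

    #include <compare>

    struct Version {
        int major_v;
        int minor_v;
        // One defaulted three-way comparison; the compiler derives <, <=, >, >=
        // from it, and defaulting <=> also brings a consistent == along for free.
        auto operator<=>(const Version&) const = default;
    };

    static_assert(Version{1, 2} < Version{1, 3}); // rewritten by the compiler in terms of <=>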
It's still worthwhile even if you have to fudge module boundaries or skip the hard parts. Kind of like having "safe" Rust wrapping "unsafe" C boundaries doesn't make the rest of Rust not worth it. Lying about the type internals of your function is still preferable to the wild west of JS.
That's why incremental adoptability is so powerful. Slightly ironically that's exactly what (AFAIK) made C++ so popular: You can start using it without really giving up anything. And ideally once you see the benefits you'll be hooked and want to (incrementally) use it more and more throughout your code.
And exactly what makes C++ codebases so hard to clean up from C idioms, as many developers to this day apparently never went beyond changing the file extension, regardless of how much security advocacy we direct at them.
C++ adds so many additional security footguns over C, that I find this line of reasoning hard to accept. The problem with C++ is not that people are using C constructs with it, the problem is that the language design itself is deficient.
Are you aware of any systematic review that shows evidence that C++ is safer than C?
The rate of safety defects between major C and C++ projects appears similar at first glance, and both way worse than managed languages or rust.
I have written a lot of the same kinds of data infrastructure software in both C and C++ and other languages, so comparison is somewhat reasonable (unlike comparing e.g. web browsers and systems software). The rate of defects is much lower in modern C++ versus C, and the types of bugs have changed too, but only part of that can be attributed to safer constructs in C++.
C requires many times more lines of code than C++ to do the same thing. AFAIK there is considerable academic evidence that bug counts roughly scale with lines of code, so languages that are precise and concise naturally reduce total defect rates. Minimizing defects requires maximizing expressiveness. The ratio of LoC between languages to express the same thing is not constant, it depends on the application.
The kinds of bugs I see in C++20, given the type of software I work on, are almost entirely the same kinds of logic and behavioral bugs that occur in every language. This is why Rust isn't as popular as one might expect for systems software: memory safety bugs are not a thing for many code bases, and Rust requires many more lines of code compared to C++20. I am sure Rust will become more economical over time but for now it is pretty verbose and has pretty limited metaprogramming functionality.
C++20 is remarkably safe and concise if you take full advantage of the type system.
> C++20 is remarkably safe and concise if you take full advantage of the type system.
Against what sort of laughably low bar are you measuring to make C++ "remarkably" safe?
This is a language which delights in deliberately adding more footguns, on the rationale that well, it's less safe so surely it'll be faster right? No need to measure, no need to investigate what actual performance optimisations might somehow be available if we allowed the dangerous behaviour, no, just mandate it and YOLO.
C++20 introduces std::span. Now, std::span is basically a slice type, Rust's [T], and to some extent it's remarkable that C++ didn't already have a slice type, but that's C++ for you. What's fascinating is that in 2020, I remind you, they standardized a type which deliberately has no safe way to use it. It was proposed as a safe type, and WG21 stripped out the safety on the rationale that now it's faster (see above), then rejected all attempts to add the usual half-arsed C++ safety features to the type now that it wasn't safe by default.
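To make the hazard concrete, a small sketch (the function is made up). Note that a bounds-checked span::at() only arrived much later, for C++26, via the very paper quoted below:

    #include <span>

    int demo(std::span<int> s) {
        return s[10]; // unchecked: undefined behavior if s.size() <= 10,
                      // and C++20 span offers no checked alternative spelling
    }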
Let me quote a C++ proposal paper (this isn't some hit piece from Rust fanatics, it's a serious proposal to the ISO working group) P2821 on std::span:
"Ultimately, this becomes a stereotypical example of how C++ traditionally handles safety. this example gets to be pointed at for years/decades to come. All of this could have been avoided"
You can replace almost everything in C++ with stricter implementations of your own design if you don't like the behaviors or guarantees of the standard/default implementation. Many people do because the language is very amenable to it and the codegen is usually optimal. Living entirely within the standard library and the constraints it imposes to support backward compatibility is a choice, not a requirement. The standard design is always going to be less than ideal for some subset of applications, it is an unavoidable tradeoff.
The metaprogramming facilities of C++ are strong enough now that there is little that can't be customized without macros in a way that is nearly transparent.
The "Eh, we'll just throw it away and build a new one" attitude in C++ is part of how you got into this mess. Slices are a vocabulary type, without one what can you do? Well of course you just pass raw pointers around. Huh I wonder why we're having so many safety problems in C++...
Yes, it starts by not coding in C++ as if it was C.
Use templates instead of macros, RAII instead of gotos, namespaces instead of prefixes, bounds-checked strings and arrays instead of raw pointers, new instead of error-prone sizeof with malloc(), ...
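A small before/after sketch of that contrast (names are illustrative):

    #include <cstddef>
    #include <vector>

    // C style: error-prone size math and manual cleanup on every exit path:
    //   int *buf = (int *)malloc(n * sizeof(int));
    //   ...
    //   free(buf);
    //
    // C++ style: an RAII container owns the memory and knows its own size.
    std::vector<int> make_buffer(std::size_t n) {
        return std::vector<int>(n); // zero-initialized, freed automatically
    }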
Yes, this is a common theory, but I don’t see evidence for it in the hard numbers. Taking two of the most popular projects in each language, with a comparable LOC count, the numbers look surprisingly similar year over year:
There’s some variability year over year, but if anything C appears to have a slight advantage over C++ in terms of memory corruption (840 vs 1004), with essentially the same number of overflow errors (322 vs 328). There is no comparable Rust project, but initial evidence from the Asahi GPU drivers hints that memory corruption errors are fundamentally eliminated.
This is obviously not accounting for confounding factors, hence my request for any peer reviewed evidence for the security claim. Until then, the facts don’t seem to be supporting it.
Are you seriously putting the Linux kernel forward as a typical C code base? Isn't that a bit like selecting the example of an F1 car to show that cars are usually at least as fast as motor cycles?
Picking Chrome and Linux as examples is good for a couple of reasons. No one will complain that the codebases are small or were written by “bad” programmers who didn’t take enough care to write good code.
Because that’s the only thing holding back some languages right? If only the programmers using them would get good, use static analysis tools then bugs would be eliminated.
Linux compiles and runs on many more architectures and hardware configurations than chrome, and it supports a frankly ridiculous number of peripherals up to and including the most complicated gpu accelerators ever made.
Chrome is indeed complex, but on what do you base your “vastly more” assertion?
You're assuming that all, or even most, C++ codebases use proper RAII and don't run wild with the vast amount of features in the language and standard library.
The trouble with bounds checking via generics/macros is that the compiler doesn't know how to optimize out the checks. Most bounds checks can be optimized out of inner loops, where it really matters. But if the overflow test is just ordinary code, the compiler can't do that.
You also want to hoist bounds checks and do them early. Often, one check at loop entry can eliminate the bounds checks for each iteration. But the language has to allow an early fail.
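A rough sketch of that hoisting pattern (names are illustrative):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    int sum_prefix(const std::vector<int>& v, std::size_t n) {
        if (n > v.size())                  // one early check at loop entry...
            throw std::out_of_range("n");
        int total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += v[i];                 // ...so the body needs no per-iteration check
        return total;
    }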
They have bounds checking through the `vec.at(index)` method, not through the indexing operator `vec[index]`. Most people won't even look at the `at` method; they will just use the indexing operator, since that's supposed to be the "default".
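To make the two spellings concrete (illustrative snippet):

    #include <vector>

    void contrast(const std::vector<int>& v) { // say v has 3 elements
        int a = v[10];    // the "default" spelling: unchecked, undefined behavior
        int b = v.at(10); // the opt-in spelling: checked, throws std::out_of_range
        (void)a; (void)b;
    }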
This is differently bad, because it may mean tight loops are enormously slower under debug for no good reason; it's an adverse consequence of the C++ "wrong defaults" problem. Because they're defaults, we can't detect whether they're what was specifically needed or just what nobody cared about, so we guess nobody cared and act accordingly.
I'd argue exactly the contrary. In Rust you went out of your way to write the unsafe thing because that's specifically what you intended. As a result there's no need to try to guess whether this code doesn't care, if it didn't care it would be checked because that's the default.
The alternate approach is also popular because the TypeScript model doesn't guarantee success: critical deficiencies can go unfixed for decades (with some of the same "not fun" challenges).
And, in fact, many of the things people hate about C++ are the result of exactly the kind of compromises you have to make to apply a TypeScript-like model to language evolution. C backwards compatibility is what made C++ successful originally but is now one of the main things that makes C++ crappy today.
> Every .js file is a valid .ts file, add 1 class and see benefit
> Lowers to standard .js, 100% seamless compat with all JS libraries
> Cooperates with the standards committee (ECMAScript)
> Brings evolution proposals to standards committee
> Leverages entire existing ecosystem - works with all JS implementations & tools
Carbon is an example of the "Dart plan". Some quotes from Carbon's "Interoperability philosophy and goals" page (my emphasis):
> The C++ interoperability layer of Carbon allows a subset of C++ APIs to be accessed from Carbon code, and similarly a subset of Carbon APIs to be accessed from C++ code.
> The result is that it will often be reasonable to directly expose a C++ data structure to Carbon without converting it to a "native" or "idiomatic" Carbon data structure. Although interfaces may differ, a trivial adapter wrapper should be sufficient.
> There should be support for most idiomatic usage of advanced C++ features. A few examples are templates, overload sets, attributes and ADL.
FWIW, I disagree about Carbon following the Dart plan (Carbon lead here).
Carbon is following a plan much more analogous to Kotlin -- we even say that on our site very explicitly.
The "subset" of C++ APIs you emphasize is only about there existing some long-tail esoteric parts of C++ that may be used rarely enough to not worry about. Everything that people use we'll need to support here. We think about interop constantly and are designing it into every aspect of the language.
Sure, there might be other interop stories that are a better fit, and Dart is a pretty unflattering comparison (perhaps chosen intentionally). But within Herb's dichotomy, I think Carbon falls into the "Dart" category, since the "Typescript" category is rather narrow.
If you're going to create a dichotomy between two languages, I think we're dramatically closer to TypeScript.
The whole point of Carbon is to integrate into and re-use an existing ecosystem of software written in C++. It's as far from the Dart approach as it can get without literally being a TypeScript style approach.
Ultimately, this dichotomy doesn't help discuss Carbon. I think it is useful for looking at Rust (until/unless Crubit or something similar radically changes its interop story), Go, and many other languages. But not Carbon IMO. It loses all of the important nuance. And there are important and meaningful differences from TypeScript's approach that we've talked about since announcing Carbon, but they don't make it anything like Dart's strategy.
The dichotomy is relevant if you are of the opinion that some of the foundational design choices of Carbon make it less viable as a C++ successor, based on the similarity of those choices to other projects that have struggled for the same reason. The talk argues for the validity of the construct with various examples, not just Dart.
The construct can be useful without being the end-all answer to what should be done, and I would definitely advise you to watch the full talk if you haven't yet. And then consider how each point may or may not apply to Carbon. Don't just dismiss it on the notion that Carbon is different from Dart.
Actually, I'd encourage you to reach out to Herb Sutter and ideally meet up in person to discuss the matter. Your goals are aligned in many ways and while you have different approaches a lot of good can come out of sharing ideas.
No, it says "No one else has tried the TypeScript plan for C++ yet". Which is true; Carbon isn't a compiler/transpiler to C++, it's a whole new language, albeit one with strong C++ interop.
"Strong C++ interop" can also be achieved via libraries from an existing language, as with the Rust "crates" cxx, autocxx, crubit. So the jury is still out as to whether an entirely different language, namely Carbon, will be useful. OTOH, cppfront can be seamlessly transpiled to C++ on a file-by-file basis which can also be a desirable feature wrt. incremental adoption.
My understanding is that none of those crates support both instantiating C++ templates from Rust and instantiating Rust generic functions from C++, let alone stuff like implementing a Rust trait with a C++ object from C++ code. These things would be _very_ difficult to do because of the impedance mismatch between the languages, and so there's space for a language with "even stronger" interop.
Carbon is disappointing and feels like a missed opportunity. It feels more like a syntax refresh. C/C++ desperately need a safety overhaul. I work in the embedded functional safety space. Rust has shown it can be done. Not having lifetime/borrowing support or something that achieves the same thing from the start gives the impression nothing has been learned in the last decade.
Google claims that Carbon is the future internally at Google for a safer C++, and Google never approves Rust for internal usage. Then I see multiple public announcements of Google rewriting something in Rust (such as the recent Binder rewrite). I'm quite confused.
Completely false. The Carbon project folks are quite clear that it is an experimental language, and that anyone not dealing with legacy C++ codebases should use Rust, or a managed language if their requirements can cope with automatic memory management.
Android, ChromeOS and Fuchsia already use Rust, and Chrome is in the process of getting its first Rust libraries integrated.
For the most part, Rust projects at Google tend to be things that are relatively self-isolated (like the Binder rewrite). Google has a huge monorepo with tens of millions of lines of C++ code, much of which is highly interlinked and interdependent. The goal of Carbon is to help move that code to a simpler and safer programming model.
Such a shame it isn't open source, so it's really impossible to eyeball QoI without actually investing a huge amount of time into the language. (Plus the usual bad things about closed source.)
I have no doubts about the qualifications (or even intentions) of the author, but one feels that a language meant for serious things should have an open implementation and/or standard. Of course, I realize this may not align with Baxter's goals, but it is going to mean ~0 adoption.
There's too much pining for Rust syntax and non-conservative changes or additions that aren't consistent with the rest of C++ syntax.
It repeats C++'s mistake of having too many non-convergent features and adds as much new syntax on top as C++ already has. This is a problem because to be able to read code, you have to know all language features at least superficially.
Instead of doing what plain C++ has been trying to do and stabilize on a smaller feature set and (at least verbally) deprecate legacy cruft, this is going the exact opposite route.
From my point of view, Cpp2 gets sold this way, due to the conflict of interest that it is being proposed by the ISO C++ chair, and naturally the story can't be that it is yet another wannabe C++ replacement like all the other ones.
Herb is infinitely more qualified to speak on the matter than I am, but I don't think I understand the point. We've had multiple decades to bring C++ under control for safe general purpose usage and it's still very loosey-goosey, to the point where people are rightly afraid to start work in it not because it's a bad or deficient language, but because keeping codebases sane is challenging and requires a lot of discipline even from experienced programmers with an eye for detail.
I think C and C++ are two of the most interesting languages out there. I think it's rarely worth bothering rewriting existing C/C++ software in something else for the sake of safety. I think there's an appreciable amount of applications where C/C++ are still justified. I don't think this will end up being much different than the last two decades worth of attempts to put some guardrails on them.
TS is relevant because JS has severe functionality deficiencies and you've historically had no other choices when writing for web browsers. C++'s deficiencies are less with the functionality of the language itself and more with properly and safely using it, and you're very rarely forced to use it.
It's not at all hard to build a sane codebase in C++ if you are a group of experienced C++ devs with an eye for detail. Not saying mistakes never happen, but most of the time the tools available are sufficient to build correct code if you know how to use them right.
However, the problems abound when you have less experienced devs in a less structured environment working with legacy C or C-style code, the latter often being the reason C++ was used in the first place. For this, something like cpp2 could be revolutionary, as it makes it easier to write correct code and provides mechanisms to disable many of the unsafe patterns.
I think the reality is that companies like Microsoft, Google, and Apple want to write C++, but safer. The reason you're seeing Herb's Cpp2 and Google's Carbon take the "ok fine, we'll make a new language which has C++ interop instead of fixing C++" route is that the C++ standards committee has been resistant to evolving the language into something those companies want, to the degree that they may end up adopting Rust, despite the absurdly high migration costs, just because C++ refused to evolve into something those companies could use safely (even with people from those companies on the committees advocating for it).
Carbon's initial announcement was vague about how safety would be achieved. The late 2023 talks are clear that (a) It will be bolted on afterwards rather than foundational, which should be a major red flag and (b) It will not include some key things which Rust makes safe because they can't see how to do that at all in their design.
Herb's Cpp2 announcement on the other hand was clear from the outset that Cpp2 is not safe. It's aimed to be fifty times safer but that seems untestable, maybe even meaningless.
The immediate trigger for Carbon was P2137. Basically P2137 says "C++ should prioritise performance over safety, safety over compatibility" and WG21 is like "No, absolutely not". That's a set piece, nobody was astonished this happened, but getting it down on paper avoids executive argument. Google could have spent six years convincing non-expert people that C++ really isn't going to deliver, or it could secure a piece of paper which says they don't even want to and eliminate that whole discussion.
Apple have pretty clearly settled on Swift. I'm not convinced they can write all their bare-metal stuff in a Swift dialect, or that they'll be able to in time to not need anything else long term, but clearly the vast bulk of new work at Apple will trend to Swift. Apple are quite good at being single-minded, and "Write all new code in Swift" is a single-minded idea. I have no idea why anybody would buy from a company like that, but they're very popular, so what do I know.
Microsoft are much less single-minded. I doubt they could settle on Swift (or Rust, or even say sticking with C++) as a company wide policy even if they wanted to. But equally they're not interested in finding themselves as "last man standing" for C++.
> It will not include some key things which Rust makes safe because they can't see how to do that at all in their design.
To be fair, Rust does have an unsafe superset and the interaction between "safe" Rust and unsafe code is quite non-trivial. It's not safe to call any part of safe Rust from an unsafe context unless the extra preconditions that safe Rust expects (as documented in the 'Rustonomicon') are proven to hold. This means that, e.g. much of the Rust stdlib and 'core' code might not be practically usable from unsafe code written in either Unsafe Rust or C/C++, whereas the idiomatic C++ counterpart might be. There is some effort underway to fix this where it matters, but these are not easy questions to address.
I don't really agree with how you represent the difficulties of unsafe Rust.
First off, unsafe Rust is not meant for writing application logic. It should be isolated within data structures, algorithms, or other abstractions exposing safe APIs.
Secondly, what you say about calling into safe Rust from unsafe contexts just doesn't sound correct. It seems like by "extra preconditions" you're talking about the requirements placed on references: that they must be initialized, non-null, and for &mut, unaliased. But these aren't requirements for calling into safe code, these are requirements for dereferencing raw pointers.
You might also be talking about the issues around moveability and Pin. But these are also not about calling into safe code, but about representing your type correctly (making certain actions only possible when pinned or whatever).
And then you talk about std and view not being practically usable from unsafe Rust, and this just doesn't align with my experience at all.
It's really not that hard to get unsafe code right (Miri is an awesome tool), and it's also not difficult to avoid unsafe code entirely if you're not comfortable with the requirements.
> these are requirements for dereferencing raw pointers.
You can use the unsafe read() and write() (and similar) functions to do things with raw pointers that would clearly involve dereferencing in C/C++ (including working with aliased pointers or 'pinned' data or writing to uninitialized memory), so I don't think this is correct from a C/C++ point of view. What Rust calls dereferencing is explicitly driven by the requirements placed on safe code; the two are effectively one and the same.
The reason Carbon exists is because Google has a huge C++ codebase that they're stuck with, and Rust doesn't have a good C++ interop story. They don't really want "safe C++"; they want Rust. But because they have to interop with their existing C++ they're having to make a language similar to Rust but follows the C++ model closer (e.g. having move constructors).
I don't think it's really any more difficult to build sane C++ codebases than any other language. The issue is not sanity; it's safety.
It's extraordinarily difficult to build a significant C++ codebase without segfaults and UB. You end up wasting a huge amount of time debugging that stuff.
Any time I've lost fighting Rust's borrow checker has easily been paid off by not having to debug segfaults.
Not my experience at all. I work on very large C++ code bases and haven’t had a bug in production for more than 5 years. The trick is solid tests and using tools like Valgrind/Helgrind to remove all memory issues.
I wish there was a clarification of the differences between the “Dart plan” and the “TypeScript plan”. If I had to guess I’d say the Dart approach is a whole new language that transpiles to C++ while the TypeScript plan is one that augments the current language with useful additions.
> Both plans have value, but they have different priorities and therefore choose different constraints… most of all, they either embrace up-front the design constraint of perfect C++ interop and ecosystem compatibility, or they forgo it (forever; as I argue in the talk, it can never be achieved retroactively, except by starting over, because it’s a fundamental up-front constraint).
> cppfront is on the TypeScript plan:
> full seamless interop compatibility with ISO Standard C++ code and libraries without any wrapping/thunking/marshaling,
> full ecosystem compatibility with all of today’s C++ compilers, IDEs, build systems, and tooling, and
> full standards evolution support with ISO C++, including not creating incompatible features (e.g., a different concepts feature than C++20’s, a different modules system than C++20’s) and bringing all major new pieces to today’s ISO C++ evolution as also incremental proposals for today’s C++.
Is there a paper for this? The video is an hour and 35 minutes.
Edit: found the GitHub repository.[1] "Where's the documentation?
I'm not posting much documentation because that would imply this project is intended for others to use — if it someday becomes ready for that, I'll post more docs."
Yep. I don't believe it's mentioned in the linked article, but Herb has mentioned in other articles/talks that cppfront is intentionally named after, and follows the same model as, cfront, the original C++-to-C transpiler[0].
C++ adds classes and other features to C, and this is like TypeScript to JavaScript; but what he supports is simplicity and safety thanks to a TypeScript approach, not what C++ brings to C!
The article assumes that the reader is familiar with what is meant by the “Dart plan” and the “TypeScript plan”. I only have cursory knowledge of Typescript and know nothing about Dart.
Others in this thread already went into more details about this, but it doesn't help me much as long as I don't have an idea what the basic difference between both approaches is.
Dart was made by Google as a new language for the web. From the beginning, it was already quite different from the JavaScript of the time, and crucially the main way to use it was through the Dart VM (which was embedded into a build of Chromium). It did have compilation to JS, but AFAIK that never caught on. In the end, Dart diverged from JS so much that the ecosystem split made it no longer viable for the web.
TypeScript, on the other hand, has always set its goal to be a superset of JS and to transpile to JS only. This means a more familiar syntax, as well as (practical, IMO) design choices that ensure higher compatibility with the existing JS ecosystem. There is no TypeScript without JavaScript. Its development coincided with that of VSCode, and it is arguably one of the main facilitators of the latter's feasibility.
I have to say Microsoft's grand plan of VSCode and TypeScript has to be one of the most astonishing software revolutions of this decade. The foresight and acuity of Erich Gamma and Anders Hejlsberg is just amazing.
I can't think of anyone besides Anders Hejlsberg that has a better track record with programming language innovation. At least 3 different languages that have all been commercially successful.
It's tricky to compare. Anders is involved in a bunch of important stuff, but it's not as though it's all successes either. He's certainly important, but I don't know how to compare "track records" on "innovation" because that's so vague.
One of his projects at Microsoft was J++ which is basically own-brand Java. Doubtless that's great material for learning to design C# but nobody is going to claim J++ was a success.
TypeScript deliberately takes a "good enough" approach to improving JavaScript, instead of designing an ideal but incompatible approach. For example, its handling of function parameter bivariance (https://www.typescriptlang.org/docs/handbook/type-compatibil...) is unsound but works much better with the existing JavaScript ecosystem. By contrast, a more academic functional programming language would guarantee a sound type system but would be a huge shift from JavaScript.
By analogy, Herb Sutter is arguing that something like the C++ Core Guidelines (https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines), with tooling help in this new Cpp2 syntax, can bring real improvements to safety. Something like Rust's borrow checker would bring much stricter guarantees, backed by academic research and careful design, but would be incompatible and a huge adjustment.
This video is about cpp2, a proposed new syntax for C++ by Herb Sutter, a famous C++ expert.
"The goal is to address existing problems in C++ by embracing the solutions and guidance we've already de facto adopted, and to have to explain less rather than more." [1]
I found a repo with some examples of the syntax [2]. For example:
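(Reconstructing from memory rather than quoting the repo verbatim, cpp2 uses a unified `name: type = value` declaration form, roughly like this:)

    main: () -> int = {
        s: std::string = "world";           // declarations read name : type = value
        std::cout << "Hello " << s << "\n"; // plain C++ calls work directly
        return 0;
    }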
It's not about that, it's about the plan towards wide adoption. TypeScript's uncompromising embrace of existing JavaScript semantics, patterns, and quirks has been key to its growing popularity.
After the talk, activity on it has been nearly zero. In a world where ideas are cheap and "show me the code" rules, this does not seem promising to me.
On the other hand, Carbon has been very actively developed since its announcement, and it also claims to be a TypeScript-for-C++. Now it also wants to be memory safe: https://www.youtube.com/watch?v=1ZTJ9omXOQ0
I'm forced to learn rust but I hope carbon will take off ASAP so I can be more productive and my (carbon) code can interface with c/c++ code easier than rust.
But Carbon doesn't have a working compiler or prototype version I can download and try out, does it? Compared to cppfront, which I have downloaded, built, and used to compile and run some demo code.
What about Nim? It already works great with C++, is easy to integrate with existing C++, and has Python-style syntax. Seems like a really good, production-ready option to me.
C++ only needs 4 things: gradually introducible memory safety, static reflection, first-class compile-time string manipulation, and adoption and refinement of its modules feature.
Considering that Chromium, being one of the most heavily statically analyzed code bases, is full of commits which fix memory leaks, your comment makes absolutely no sense.
Chromium is a massive project spanning decades of intense active development that intricately connects just about every domain of programming.
A few leaks fixed per month is nothing. Further, there is no major language promising no leaks once you include things like retained references to garbage and reference cycles. Leaks are one thing modern C++ solves pretty well.
It's a bit hard to get an overview. But I did spot several uses of owning raw pointers and switching from those to managed ones. I'd say that's exactly the point I was trying to make.
I understand that, being a Microsoft employee, there's always an incentive to promote Microsoft initiatives, but I feel that it's disingenuous to describe this sort of initiative in C++ circles as "TypeScript for C++" when C++ originated from "C with Classes", which is the same transpiling-based approach but without Microsoft marketing attached to it.
Also, I think that promoting a new programming language as "TypeScript for C++" is a marketing move to try to force their way into a market share in spite of the language's merits.
The only way typescript is going away is if JS essentially incorporates it.
The only things people don't like about it are that you need some tooling and a build step.
However, with its popularity, ts is built in to a lot of things, so the tooling usually isn't a big burden (particularly to get started).
And it's viable in a lot of cases these days to forgo the build step and instead use JS with typescript type annotations. (That might be what you heard about. That's really still using typescript, though.)
There are plenty more examples of why people tried typescript and went back to javascript. It's definitely not only because of a build step, but also "type gymnastics", and other code bloat.
Claiming "many people" and proceeding to link to 4 posts that use Svelte as an example is a bit silly considering Rich Harris has said, numerous times, that the decision in Svelte's codebase should NOT be taken as advice for what to do in your codebase, because their decision only pertains to a very specific set of circumstances.
I was countering the previous comment's assertion that "the only reason" people don't like TypeScript is that it has a build step, and your comment doesn't prove me wrong.
Typescript has a lot of momentum on all fronts and is almost certainly the future of Javascript.
The big problem is that it still requires transpilation, which can set up all sorts of shitty traps. Not to mention that the Javascript ecosystem in general is a horrendous mess, which Typescript on its own can’t fix.
Yet, most of the code people write has statically defined types.
Anyway, the whole point of JavaScript is that it runs on the browser. Outside of that, it has no strong points. Even though most of them are not weak enough to immediately abandon the language, its type system is one of the weakest.
WebAssembly is not without tradeoffs either. I'm not an expert, but it's often heavier due to bundling the native language's stdlib; it's annoying to interop with the browser environment because that's still JavaScript-tailored; it's also just hard to write code that can be compiled into WASM (e.g. in Rust, it needs to be `#[no_std]`)
With possibly the exception of TypeScript, I agree. Practically all the tools commonly being thrown at JavaScript today are destroying the developer experience and causing problems that beget even more tools. It's easier than ever to use plain JavaScript in ways that use proper scoping, but so many developers today speak as if this is impossible ("it just doesn't scale bruh"). That's how you know you're talking to someone who didn't write software before ~2011. It's fine if someone prefers to use said tools, but the idea that more tools are necessary is an example of poor engineering.
There are a very limited number of high-profile, very niche projects that have switched to types-via-comments to skip transpile time. (IIRC, a super low-level part of the Deno engine.) This does not represent the industry moving away from TypeScript.
A couple of relatively high profile projects stopped using it. Meanwhile across most industries companies use it, I almost never encounter vanilla JS projects these days
That seems like missing the point of the article. What I understand the author to mean with Dart plan vs Typescript plan is the way these languages approached evolution of the base language (JavaScript).
Dart aimed to replace JavaScript completely and isn’t very compatible with it, leading to issues like not being able to leverage the existing library ecosystem. While the Typescript approach enhances the base language instead of replacing it and is still compatible with existing libraries.
When looking at language adoption, the Typescript approach seems to have worked a lot better than the Dart approach. If it wasn’t for Flutter, Dart would probably be irrelevant by now and Typescript is now pretty much everywhere where JavaScript is.
Another successful-ish example of the Typescript plan is Kotlin, which was originally designed as an improved Java, fully compatible with the existing ecosystem.
So I can see where the author comes from when trying to do the same thing for C++.
Herb would like to associate Cpp2 with Typescript (which is generally considered to have succeeded) and other 2022 C++ Successor Languages with Dart (not so much). Herb emphasises the way in which what he's done is like Typescript and de-emphasises ways in which it's entirely unlike Typescript.
I guess that's smart positioning. But a big problem Herb has is that the real alternative isn't any of those 2022 C++ Successor Languages. Ultimately your project will decide whether to stick with C++ or go to a language like Rust or maybe Swift. "Alternatives to C++" that might be finished some time in the next five years are irrelevant to that decision. It's like arguing that you're the best Chicago-style pizza joint in Naples. Who cares?
And when comparing against Rust or Swift, we're back to Herb's ten year head start problem also mentioned in this talk. Rust is what, 18 months away from its tenth anniversary of Rust 1.0? Swift is even closer.
The problem with your argument is that swift is not, and cannot be, a replacement for C++. At best it can be an alternative for some programs, and on macOS only; for the rest of the world it's not very useful. And rust is great but the thing it lacks here is a great C++ interop story, the way TS has with JS and kotlin has with java. There's not really any language with that today (maybe D?) that also has a compelling story of being modern and safer than C++.
In that sense, cpp2 might get to the point of being usable while having perfect interop with C++ (as in, you can use any C++ library trivially, and you can mix it with C++ in a codebase on a per file, maybe even within files, easily) before Rust or swift or Carbon.
It's not really a "web technology" at all. It's a language which transpiles to standard JavaScript. The only real web-geared parts of TypeScript are the included (optional) DOM typings.
I would say this is largely true. C++ is incredibly fun and incredibly hard to wrangle. I think it's one of the most interesting ecosystems out there and you would have to do a lot of convincing to get me to start a new project in it. Everybody says they're going to be more disciplined this time, but after enough time, they remember the not-so-fun parts of the language. I'm not one of those freaks that has a fanatical drive to rewrite existing C++ software in Rust, but I do think it's something that's best avoided if you can help it.
As far as I recall, all compiled languages use untyped target languages. The only downside with TS/JS is that the JS implementation loses an optimization opportunity by not being aware of the TS-checked types.
I think the Java bytecode is probably type-aware? But yes, in general AOT-compiled languages result in machine code, and so it isn't type-aware after transformation.
> Type annotations for dynamically-typed languages is just a bad idea.
Yet, as with the parent comment, you haven't divulged your reasoning behind this statement. If you're going to make such a broad statement, at least place your rationale beside it.
> JavaScript's only reason for existence is web browsers.
JavaScript has seen broad adoption throughout the industry, for servers (Node.js, Deno, Bun), IoT (DeviceScript), browsers (duh) and mobile apps (NativeScript, React Native, etc.). It has its weak points, but downright dismissing it (again, without any rationale) is unfair and disingenuous IMO.
It breaks the whole point of typing being dynamic to begin with.
You don't build static typing on top of dynamic typing; it's silly. You do it the other way around.
If you're going to be outside of the browser, just use Python, which is not only somewhat similar but also the world's most popular programming language.
I sort of agree: As a C++ developer I have no idea what TypeScript or Dart really are... but it's Herb Sutter so I'll give him my time. He's surely earned that at the very least?