
Sean (the author of Circle) is an impressive guy. He started pursuing this work at about the same time several of the "C++ successor languages" were created, and although all of them claimed to be about solving this problem, especially when first announced, none of them actually has a solution, unlike this Circle work. Let me briefly enumerate:

Val (now Hylo) says it wants to solve this problem... but it doesn't yet have the C++ interop story, so it's really just a completely different language that nobody uses.

Carbon wants to ship a finished, working Carbon language and then bolt on safety (somehow), but only for some things, not data races for example, so you still don't actually get Rust's memory safety.

Cpp2 explicitly says it isn't even interested in solving the problem; Herb says that if he can produce measurements showing it's "safer" somehow, that's good enough.

It's interesting how many good ideas from Rust come along for free when you try to do this. Do you like destructive move? Good news: that's how you make this work in Rust, so that's how Circle does it. Exhaustive pattern matching? Again, Circle does that for the same reason Rust needs it.
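A minimal Rust sketch of the two features mentioned above (illustrative names only, not from Circle's or Rust's actual codebases):

```rust
// Destructive move: once a value is moved, the compiler forbids further
// use of the original binding, so no double-free or use-after-move can occur.
fn take(v: Vec<i32>) -> usize {
    v.len()
}

// Exhaustive pattern matching: the compiler rejects any `match` that does
// not cover every variant, so adding a variant later forces every match
// site to be revisited.
enum State {
    Idle,
    Running(u32),
    Done,
}

fn describe(s: &State) -> &'static str {
    match s {
        State::Idle => "idle",
        State::Running(_) => "running",
        State::Done => "done",
        // omitting any arm here is a compile error
    }
}

fn main() {
    let v = vec![1, 2, 3];
    let n = take(v);
    // println!("{:?}", v); // compile error: `v` was moved into `take`
    assert_eq!(n, 3);
    assert_eq!(describe(&State::Running(7)), "running");
}
```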

It is certainly true that this is not the Only Possible Way to get Safety; it would be profoundly weird if Rust had stumbled onto such a thing. But "let's just copy the thing that worked in Rust" is somehow at once the simplest possible plan that could work and an astonishing achievement for one man.




Right now, Circle looks like the only TypeScript-like evolution path for existing C++ with a production-quality compiler.

Unfortunately, WG21 seems to have had issues with ideas coming out of Circle going back to its early days, and I don't see them being willing to adopt Sean's work.

Which is really a pity, as he single-handedly managed to deliver more than whole C++ compiler teams, who are stuck in endless discussions about the philosophical meaning of the word "safety".

Maybe at least some vendor in the high-integrity computing domain will adopt his work.


> Unfortunately WG21 seems to have some issues with any ideas coming out from Circle, going back to its early days, and I don't see them being willing to adopt Sean's work.

What reasons? Are those valid?


It was due to the metaprogramming capabilities: Circle lets you use full C++ at compile time instead of constexpr/constinit/consteval. David Sankel has a talk where he jokes about the WG21 decision process behind that,

"Don't constexpr All the Things", from CppNow 2021,

https://youtu.be/NNU6cbG96M4?t=2045


Well, GCC supports -fimplicit-constexpr these days: https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Optio...


From what I understand, there's a price Sean is asking for his work that no one is willing to pay at the moment.


I'm very confused by the love for Circle: there's a GitHub repository that hasn't been updated in seven months and doesn't have a license.

When I first heard about it, a lot of the ideas seemed interesting, but are there users of it?

I think my biggest question is "what is the goal here?"

For Carbon, they're pretty explicit that it's a research prototype. If anything is to come of it, it will need to be usable at Google's scale (they have real issues with build times, they build from head so ABI compatibility isn't required, etc.).

Herb wasn't really designing cpp2 as a successor language so much as a playground for understanding which features might make sense to propose for adoption in C++.

What is Circle? It's more than just some project, but the ideas haven't been adopted in C++ and the compiler repository isn't being updated.


Circle is not open source, that’s correct.


I think that's automatically a dead end, then. People have increasingly abandoned closed-source compilers: they create a huge risk if the maker decides to stop maintaining them. Most languages that people pick up have an open-source implementation.


In most cases, I’d probably agree with your point here, but in this case, I think you’re wrong. If Circle can truly accomplish its stated goals, the value proposition of a memory-safe superset of C++ is ginormous. Lots of companies with critical software written in C++ won’t care that Circle isn’t open-source as long as it ticks all of their boxes (certifications, audits, etc.) and they have a strong enterprise support story. This isn’t your average project.


Even Ada is not that closed. But yeah, Ada proves that some will pay, a lot.


Enough that there are still seven vendors selling Ada compilers.


I hope that's right, but I don't really get it.

From a practical standpoint, when you run your tests with MSan and ASan (and you have decent test coverage), I'm not convinced of the benefits of the memory safety that e.g. Rust provides.

Supposing that it is worth it, though, why not migrate to Rust? I have a friend starting a startup/consultancy to do just that, and it makes more sense than using Circle. Even Carbon says "if you can use Rust, use Rust, not Carbon".


The problem with testing is that you need a very large number of tests to cover a moderately complex piece of software: you obviously need to test most branches, but often you also need to test combinations of behaviours, such that 100% line coverage is still not even close to enough testing.

The advantage of compile-time verification is that you can prove that certain paths and behaviours are impossible and don't need to be tested. This reduces the space of required tests combinatorially. We all already rely on this: no one bothers testing (for example) that standard library functions work correctly in their own code base. In Rust (and strongly typed languages in general) there are entire classes of tests and assertions that simply aren't needed anymore.
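A small Rust sketch of the point above, with made-up types: by encoding an invariant in the type system, the "did the caller forget to authenticate?" class of tests disappears, because the compiler proves that path impossible.

```rust
// Hypothetical example types, chosen only to illustrate the idea.
struct Conn;

// The ONLY way to obtain an AuthenticatedConn is through `authenticate`,
// so every function taking one can assume the invariant holds.
struct AuthenticatedConn(Conn);

fn authenticate(c: Conn, password: &str) -> Result<AuthenticatedConn, Conn> {
    if password == "secret" {
        // stand-in for a real credential check
        Ok(AuthenticatedConn(c))
    } else {
        Err(c)
    }
}

fn query(_c: &AuthenticatedConn) -> &'static str {
    // No runtime "is_authenticated" branch exists here, so there is
    // nothing of that kind to test.
    "rows"
}

fn main() {
    let auth = authenticate(Conn, "secret").ok().expect("auth failed");
    assert_eq!(query(&auth), "rows");
    // query(&Conn) would not compile: the unauthenticated path is
    // unrepresentable, not merely untested.
}
```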


> Supposing that it is worth it though, why not migrate to rust? I have a friend starting a startup / consultancy to do just that, and it makes more sense than using circle. Even carbon says "if you can use rust, use rust not carbon"

A key selling point for Circle is the "superset of C++" aspect. What you're proposing, while feasible for startups, is entirely unreasonable for larger companies, especially those with existing [usually massive] C++ codebases. With enough evangelizing, you might get them to agree to start using something like Rust in new, smaller, internal projects. However, suggesting a rewrite of anything mission-critical that can directly impact the bottom line in an entirely different language, ecosystem, set of behaviors and guarantees, community, etc. is one of the scariest proposals you can make to a company. Are the existing engineers proficient in this new language? How long will it take to ramp up? Are there any new costs (not just financial) to adopting this new language (hint: there always are)? Are there legal concerns? How long will the rewrite take (hint: likely longer than the engineers think)? The list goes on. It's simply too risky a proposition.


I don't believe your claim that migrating to Rust is untenable, primarily because I worked at Google, where I've seen them migrate the codebase consistently through years of development and language updates. If an enormous company like Google can switch CPU architectures, change its numeric types, change out its hash maps, etc., then yes, you can migrate to a whole new language. (If Google thought this was impossible, why bother with Carbon?)

I say this confidently because I've worked directly with people doing this and seen their work.

I will go one better and say: you can migrate to idiomatic Rust (with help from some custom libraries).

Should companies do this? It depends on the industry and the need for the kind of safety Rust provides.


The greatest value of Circle isn't in the compiler and tooling; it's in the design. Designing a C++ superset with Rust-like safety properties is hard. Once Circle gains traction, there's a 110% chance that it gets reimplemented elsewhere.

That said, I'm wholly uninterested in any proprietary language, too.


It won't gain traction because it's not open source.


Sean has said he would consider open-sourcing it later, but doing so now would defeat the purpose of the project. He makes a lot of progress simply because he's not watching all the issues and PRs on GitHub.


There are plenty of open source projects that aren’t developed in the open and just throw a source tarball over the wall periodically. Lua for example.


Sure, but as soon as you put out source you're going to have suggestions/comments/criticisms/PRs/etc.


I think it's more complicated than that, but I agree it's a factor that would give some folks pause about adopting Circle right now.


Isn't MSVC closed source?


Yes, but there are several arguments in favor of MSVC that don't apply in general.

MSVC is for and from the same people as Windows. Windows is large and popular enough that it isn't going away (Microsoft would have to die, and even if that happened, you can bet organizations like the US government would take over Windows), so betting that MSVC won't go away is safe enough. If it doesn't work out, your company is in trouble, but so is everyone else. It's when you bet on something less popular that you get into potential trouble, because the thing you depend on can be canceled.

MinGW isn't as good as MSVC, but if forced you could use it instead. That means you have options, so a bet on this particular closed-source product has an understandable, much more manageable worst-case risk.

Small closed-source projects like Circle should be used as a small experiment: if it fails, you can rewrite everything in something open source in a few months, so the risk is low. There is one other common option (though I don't know what Circle offers): you can bet on Circle after your lawyers get a contract saying that if it becomes unavailable, you get the source code and rights, so your worst case is maintaining it yourself. These contracts are made in business all the time, and a good lawyer will have no problem getting such risk terms into a contract.


There is still a healthy commercial C and C++ compiler market, especially in embedded, high-integrity computing, and games.


Yes, but the whole value proposition of Circle is rewriting existing C++ libraries in safe C++. If that weren't the point, you could "just" use Rust and call them from there. And without an open-source compiler, that won't happen, even if it were free as in beer.


Regulation and certified compilers also help drive decisions.


No, the value is that you can use existing C++ with Circle. Rust might be a great language, but if I have several million lines of C++ and I just want to work with a std::vector<MyCppClass>, Rust will have a lot of trouble.


>Carbon wants to ship a finished working Carbon language, then bolt on safety (somehow) but, only for some things, not data races for example, so, you still don't actually have Rust's Memory Safety

I'm not sure this is correct. As I understand it, Carbon's plan is to add a borrow checker like Rust's.

From a recent talk[0][1] by one of the lead developers:

>Best candidate for C++ is likely similar to Rust’s borrow checker

[0] slides: https://chandlerc.blog/slides/2023-cppnow-carbon-strategy/in...

[1] relevant timestamps:

https://youtube.com/watch?v=1ZTJ9omXOQ0&t=1h31m34s

https://youtube.com/watch?v=1ZTJ9omXOQ0&t=1h9m49s


Chandler has explicitly said that he doesn't see a reason to solve data races.

The borrow checker isn't enough on its own to solve this in Rust. As Sean explains (probably in more depth in Circle's documentation), you need to track a new, infectious property of types; Rust does this with the Send and Sync traits.
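A brief Rust sketch of that infectious property: Send and Sync are auto traits, so a type containing a non-Sync field is itself non-Sync, and `thread::spawn` requires its closure to be Send, which is what stops data races at compile time.

```rust
use std::cell::Cell;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<i32> is Send + Sync, so sharing it across threads compiles.
    let shared = Arc::new(42);
    let s2 = Arc::clone(&shared);
    let handle = thread::spawn(move || *s2 + 1);
    assert_eq!(handle.join().unwrap(), 43);

    // Cell<i32> is !Sync, and the property propagates: Arc<Cell<i32>>
    // is therefore !Send, so the spawn below is rejected at compile time.
    let not_sync = Arc::new(Cell::new(42));
    // thread::spawn(move || not_sync.set(1));
    // ^ error: `Cell<i32>` cannot be shared between threads safely

    // Single-threaded use of Cell remains fine.
    not_sync.set(1);
    assert_eq!(not_sync.get(), 1);
}
```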


"Chandler has explicitly said that he doesn't see a reason to solve data races."

Er, the slide title says that solving this is highly desirable, just not a strict requirement for security purposes.

Not sure how that's the same as "doesn't see a reason to solve data races". I see lots of reasons. I just think it is possible to achieve the security goals without it.

FWIW, I'm hopeful we'll end up including this in whatever model we end up with for Safe Carbon. It's just a degree of freedom we also shouldn't ignore when designing it.


> Not sure how that's the same as "doesn't see a reason to solve data races". I see lots of reasons. I just think it is possible to achieve the security goals without it.

If Carbon doesn't prevent data races, then how exactly will it achieve memory safety? Will it implement something like OCaml's "Bounding Data Races in Space and Time"? [0]

If we ignore compiler optimizations, the problem with data races is that they may make you observe tearing (incomplete writes), and thus it's almost impossible to maintain safety invariants with them. But the job of a safe low-level language is to give the programmer tools to guarantee the correctness of the unsafe parts. In the presence of data races, this is infeasible. So even if you find a way to ensure that data races aren't technically UB, data races happening in a low-level language will surely lead to UB elsewhere.

Ultimately this may end up showing as CVEs related to memory safety so I don't think you can achieve your security goals without preventing data races.

[0] https://kcsrk.info/papers/pldi18-memory.pdf
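For illustration, here is how Rust (as a point of comparison for any safe low-level language) rules this out: unsynchronized shared mutation of a plain u64 across threads simply won't compile, so the programmer must opt into an atomic, whose word-sized loads and stores cannot tear.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Sharing a `&mut u64` across threads is a compile error in Rust;
    // AtomicU64 is the sanctioned alternative.
    let counter = Arc::new(AtomicU64::new(0));

    let mut handles = Vec::new();
    for _ in 0..4 {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // Each read-modify-write is atomic: no thread can ever
                // observe half of a 64-bit write.
                c.fetch_add(1, Ordering::Relaxed);
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(counter.load(Ordering::Relaxed), 4000);
}
```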


It is possible to have a memory model that blocks word tearing without full logical data race prevention. Java does it, although it benefits from not having to deal with packed types etc.


I'm not sure, but I don't think this is the case. https://openjdk.org/projects/valhalla/design-notes/state-of-...

> Tearing

> For the primitive types longer than 32 bits (long and double), it is not guaranteed that reads and writes from different threads (without suitable coordination) are atomic with respect to each other. The result is that, if accessed under data race, a long or double field or array component can be seen to “tear”, where a read might see the low 32 bits of one write, and the high 32 bits of another. (Declaring the containing field volatile is sufficient to restore atomicity, as is properly coordinating with locks or other concurrency control.)

> This was a pragmatic tradeoff given the hardware of the time; the cost of atomicity on 1995 hardware would have been prohibitive, and problems only arise when the program already has data races — and most numeric code deals with thread-local data. Just like with the tradeoff of nulls vs. zeros, the design of primitives permits tearing as part of a tradeoff between performance and correctness, where primitives chose “as fast as possible” and objects chose more safety.

> Today’s JVMs give us atomic loads and stores of 64-bit primitives, because the hardware makes them cheap enough. But primitive classes bring us back to 1995; atomic loads and stores of larger-than-64-bit values are still expensive, leaving us with a choice of “make operations on primitives slower” or permitting tearing when accessed under race. For the new primitive types, we choose to mirror the behavior of the existing primitives.

> Just as with null vs. zero, this choice has to be made by the author of a class. For classes like Complex, all of whose bit patterns are valid, this is very much like the choice around long in 1995. For other classes that might have nontrivial representational invariants, the author may be better off declaring a value class, which offers tear-free access because loads and stores of references are atomic.

The key here is the last phrase: "For other classes that might have nontrivial representational invariants, the author may be better off declaring a value class, which offers tear-free access because loads and stores of references are atomic.". This implies that to avoid tearing you would need to introduce a runtime cost to every access, which is unacceptable for a language aiming to replace C++.

And you can assume that a low level language like Carbon has a lot of types with nontrivial invariants. Just like in Java, data races WILL make one thread observe a partially written value in another thread.

In the presence of data races, you can only avoid tearing when writing to fields whose size is smaller than or equal to the word length (typically 64 bits). If all you have are small primitives or pointers, then it might work. But Carbon can't abide by this restriction either.


Thanks for clarifying that point. It's worth pointing out that the safety strategy doc[0] mentions that

>A key subset of safety categories Carbon should address are:

>[...]

>Data race safety protects against racing memory access: when a thread accesses (read or write) a memory location concurrently with a different writing thread and without synchronizing

But then later in the doc it says

>It's possible to modify the Rust model several ways in order to reduce the burden on C++ developers:

>Don't offer safety guarantees for data races, eliminating RefCell.

>[...]

>Overall, Carbon is making a compromise around safety in order to give a path for C++ to evolve. [...]

One could read this as saying that guaranteed safety against data races is not a goal. Perhaps this doc could be reworded? Maybe something like "Carbon does not see guaranteed safety against data races as strictly necessary to achieve its security goals but we still currently aim for a model that will prevent them."

[0] https://github.com/carbon-language/carbon-lang/blob/trunk/do...


You're right. In fact it was in the previous slide[0] from that same talk. Thanks for pointing that out.

[0] https://youtube.com/watch?v=1ZTJ9omXOQ0&t=1h8m19s



