pjmlp's comments | Hacker News

I love the Java/Kotlin userspace, even if it is the Android flavour of Java, and the our-way-or-the-highway attitude towards C and C++ code, instead of yet another UNIX clone with some kind of X Windows bolted onto the phone.

In the past I was also on Windows Phone: again a great .NET-based userspace, with some limited C++, moving into the future instead of legacy OS design.

I can afford iPhones, but won't buy them for private use, as I am not sponsoring the Apple tax when I think about how many people in this world can hardly afford a feature phone in the first place.

However, I do also support their Swift/Objective-C userspace, which again isn't yet another UNIX clone.

If the Linux phones are to be yet another OpenMoko with Gtk+, or Qt, I don't see it moving the needle in mainstream adoption.


Given that most modern languages, with the exception of C-derived ones, are a half implementation of Lisp (GC, JIT, JIT caches, REPL, dynamic code loading, IDE tooling), and that this AI wave is driven by the language that Peter Norvig described in 2010 as an acceptable Lisp, I would say it still succeeded.

WASM is a solution looking for a problem that most people don't care about.

ASP.NET MVC alongside JS/TS frameworks does the job just fine, as do Spring, Quarkus and co.


Yet Cloudflare happened.

SIMD code.

And if you are going to point out compiler extensions, they are extensions exactly because ISO C cannot do it.
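
For illustration, a minimal sketch of one such extension, the GCC/Clang vector_size attribute (assuming GCC or Clang; ISO C has nothing equivalent):

    typedef float f32x4 __attribute__((vector_size(16)));

    f32x4 add(f32x4 a, f32x4 b) {
        // Compiles to a single SIMD add on x86/ARM; there is no portable
        // ISO C spelling for this, which is exactly why it is an extension.
        return a + b;
    }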


Rust doesn't make sense for web development; any compiled language with automatic memory management and value types has much better tooling and a much better ecosystem for it.

Use it where it is ideal: systems-programming-level tasks where, for whatever reason, automatic memory management is either not possible or not wanted.


You mean what Ada and Modula-3, among others, already had before it came to C99?

Who cares who had it first, what matters is who has it, and who doesn't...

Apparently some do, hence my reply.

Then why not go one better: K&R C with an external assembler, that is top. /s

> external assembler

Is that supposed to emphasize how poor that choice is? External assembly is great.


When talking about K&R C and the assembler provided by UNIX System V, yes.

Even today, the only usable assemblers on UNIX platforms were born on the PC or the Amiga.


Never; you can already do this with RAII, and naturally it would be yet another thing for people to complain about regarding C++ adding features.

Then again, if someone is willing to push it through WG21 no matter what, maybe.


C++ implementations of defer are either really ugly, thanks to using lambdas and explicitly named variables which only exist to have a scoped object, or they depend on macros, which need either a long manually namespaced name or you risk stepping on the toes of a library. I had to rename my defer macro from DEFER to MYPROGRAM_DEFER in a project due to a macro collision.

C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.
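
To illustrate, a minimal sketch of the macro approach I mean (names illustrative, assuming C++17); note how the unique variable name has to be pasted together from __LINE__, which is exactly the machinery a native defer would remove:

    #include <cstdio>
    #include <utility>

    template <typename F>
    struct ScopeGuard {
        F fn;
        explicit ScopeGuard(F f) : fn(std::move(f)) {}
        ~ScopeGuard() { fn(); }  // runs the deferred action at scope exit
        ScopeGuard(const ScopeGuard&) = delete;
    };

    #define DEFER_CAT_IMPL(a, b) a##b
    #define DEFER_CAT(a, b) DEFER_CAT_IMPL(a, b)
    #define MYPROGRAM_DEFER(...) \
        ScopeGuard DEFER_CAT(defer_, __LINE__){[&] { __VA_ARGS__; }}

    void demo() {
        std::FILE* f = std::fopen("data.txt", "r");
        if (!f) return;
        MYPROGRAM_DEFER(std::fclose(f));  // closed on every exit path
        // ... use f ...
    }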


Because they are all the consequence of holding it wrong, avoiding RAII solutions.

Working with native C APIs in C++ is akin to using unsafe in Rust, C#, Swift...: it should be wrapped in type-safe functions or classes/structs, never used directly outside implementation code.

If folks actually followed this more often, there would be far fewer CVE reports in C++ code caused by calling into C.
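
As a rough sketch of that discipline (illustrative names, assuming C++11 or later): the raw C calls stay inside the class, and callers only ever see the type-safe interface.

    #include <cstdio>
    #include <memory>
    #include <stdexcept>
    #include <string>

    class File {
    public:
        File(const std::string& path, const char* mode)
            : handle_(std::fopen(path.c_str(), mode)) {
            if (!handle_) throw std::runtime_error("cannot open " + path);
        }
        // RAII guarantees fclose on every exit path, no manual cleanup.
        std::string readLine() {
            char buf[256];
            return std::fgets(buf, sizeof buf, handle_.get()) ? buf : "";
        }
    private:
        struct Closer {
            void operator()(std::FILE* f) const { std::fclose(f); }
        };
        std::unique_ptr<std::FILE, Closer> handle_;
    };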


> Because they are all the consequence of holding it wrong, avoiding RAII solutions.

C++ is as popular as it is in large part because of how easy it is to upgrade an existing C codebase in place. Doing a complete RAII rewrite is at best a long-term objective, if not completely out of the question.

Acknowledging this reality means giving affordances like `defer` that make upgrading C codebases, and C++ code written in a C style, easier without having to rewrite the universe. Because if you're asking me to rewrite code in a C++ style all in one go, I might not pick C++.

EDIT: It also occurs to me that destructors have limitations of their own. They can't throw, which means that if you encounter an issue in a dtor you often have to ignore it and hope it wasn't important.

I ran into this particular annoyance when I was writing my own stream abstractions - I had to hope that closing the stream in the dtor didn't run into trouble.


You can use a function try block on the destructor. Additionally, thanks to C++ metaprogramming capabilities, many of these handler classes can be written only once and reused across multiple scenarios, as sketched below.
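
A minimal sketch of such a write-once handler (illustrative names, assuming C++17 for the auto non-type template parameter):

    #include <cstdio>

    // One generic owner, parameterized on the handle type and its release
    // function; written once, reused for any C API with a create/destroy pair.
    template <typename T, auto Release>
    class Handle {
    public:
        explicit Handle(T h) : h_(h) {}
        ~Handle() { if (h_) Release(h_); }
        Handle(const Handle&) = delete;
        Handle& operator=(const Handle&) = delete;
        T get() const { return h_; }
    private:
        T h_;
    };

    inline void closeFile(std::FILE* f) { std::fclose(f); }
    using FileHandle = Handle<std::FILE*, closeFile>;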

Yes, unfortunately that compatibility is also the Achilles heel of C++: so many C++ libraries are plain old C libraries with extern "C" { ... } added when using a C++ compiler, which is also why so many CVEs keep happening in C++ code.


If I'm gonna write RAII wrappers around every tiny little thing that I happen to need to call once... I might as well just use Rust and make the wrappers do FFI.

If I'm constructing a particular C object once in my entire code base, calling a couple functions on it, then freeing it, I'm not much more likely to get it right in the RAII wrapper than in the one place in my code base I do it manually. At least if I have tools like defer to help me.


If you do it once, why do you care about an "ugly" scope_exit? By the way, writing such wrappers is easy and does not require a lot of code.

What do you mean by '"ugly" scope_exit'?

Do you mean why I care that I have to call the free function at every exit point of the scope? That's easy: because it's error prone. Defer is much less error prone.


Not to mention that the `scope_success` and `scope_failure` variants have to use `std::uncaught_exceptions()`, which is hostile to codegen and also has other problems, especially in coroutines. C++ could get exception-aware variants of language defer.
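
For reference, a minimal sketch of why those variants depend on std::uncaught_exceptions() (illustrative names, not the std::experimental interface): the destructor compares the in-flight exception count now against the count at construction to tell success from stack unwinding.

    #include <exception>
    #include <utility>

    template <typename F>
    class ScopeFailure {
    public:
        explicit ScopeFailure(F f)
            : fn_(std::move(f)), count_(std::uncaught_exceptions()) {}
        ~ScopeFailure() {
            // Fires only if a new exception is propagating through this scope.
            if (std::uncaught_exceptions() > count_) fn_();
        }
    private:
        F fn_;
        int count_;
    };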

What C++ really needs is an automated way to handle exceptions in destructors, similar to how Java does in its try-with-resources finally blocks.

While not automated, you can make use of function-try-blocks, e.g.:

    struct Example {
        Example() = default;

        ~Example()
        try {
            // Release resources for this instance.
        } catch (...) {
            // Take care of what went wrong in the whole destructor call chain.
        }
    };

-- https://cpp.godbolt.org/z/55oMarbqY

Now with C++26 reflection, one could eventually generate such boilerplate.


What I’m thinking of is that the C++ exception runtime would attach exceptions from destructors to any in-flight exception, forming an exception tree, instead of calling std::terminate. (And also provide an API to access that tree.) C++ already has to handle a potentially unlimited number of simultaneous in-flight exceptions (nested destructor calls), so from a resource perspective having such a tree isn’t a completely new quality. In case of resource exhaustion, the latest exception to be attached can be replaced by a statically allocated resources_exhausted exception. Callbacks like the old std::unexpected could be added to customize the behavior.

The mechanism in Java I was alluding to is really the Throwable::addSuppressed method; it isn’t tied to the use of a try-block. Since Java doesn’t have destructors, it’s just that the try-with-resources statement is the canonical example of taking advantage of that mechanism.


I see; however, I don't envision many folks on WG21 voting that in.

https://juliahub.com/case-studies

Most "Python" applications are actually bindings to C, C++ and Fortran code doing the real work.

