
That's not quite right.

They jump through hoops so that C programs and the programs of all the other languages designed to run in the same execution model run as fast as possible. It's not to indulge C programmers but to support the vast body of existing software.

Hardware engineers need optimization targets just like anyone else, and it's an eminently reasonable one.

Also, plenty of improvements unrelated to the C execution model have been made. Demonstrably, the C model is not holding us back.

Anyway, if you want to move on from the C execution model, great. Now you need to introduce a practical transition plan, which should include such details as how and why we should rewrite all of the existing performance sensitive software designed to work in the old model.




The C execution model has demonstrably held us back, because the practical transition plan for migrating existing C codebases would be automated translation of the C Turing tarpit into sane functional/formal models.


Demonstrably?


Sure. Foreign function interfaces (FFIs), such as Python's ctypes, are probably the clearest examples of how C holds us back. Start there and a sprawling network of technologies comes into view, all centered on C, from dynamically-linked code objects (DSOs and DLLs) to the required use of '%' characters for printf-style string formatting.
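
To make the FFI point concrete, here's a minimal sketch (the function name hypo_sum is made up for illustration): whatever language a library is written in, being reachable from Python's ctypes, Rust, Java and the rest generally means flattening the interface down to the C ABI, because that's the one calling convention every FFI speaks.

    // Hypothetical library function exposed through the C ABI so that any
    // language's FFI can call it. Only C-compatible types cross the boundary:
    // raw pointers and integers, no templates, no exceptions, no std::string.
    extern "C" int hypo_sum(const int* values, int count) {
        int total = 0;
        for (int i = 0; i < count; ++i)
            total += values[i];
        return total;
    }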

We need to move beyond memory-unsafe paradigms already. Assembly intrinsics aren't great but they're certainly better than C.
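
For what it's worth, "assembly intrinsics" here means something like the sketch below (x86 SSE, assuming an x86 target): the programmer, not the C abstract machine, decides the vector width, and each call maps roughly one-to-one onto a machine instruction.

    #include <immintrin.h>

    // Add four floats at once with a single SSE instruction (ADDPS).
    // The vector width is chosen explicitly rather than left to the compiler.
    __m128 add4(__m128 a, __m128 b) {
        return _mm_add_ps(a, b);
    }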


Sorry, who requires % formatting?

Never use it myself, except for configuring the date format in my menu bar.


In a way this already happened, with GPU programming, which has a more realistic model of what modern hardware looks like (an explicit memory hierarchy, for example).

Because the speed advantage is so huge, people went through the pain of learning this new model and redesigning algorithms to better fit it.
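
A minimal CUDA sketch of what "explicit memory hierarchy" means in practice (toy kernel, made up for illustration; staging through shared memory buys nothing for a computation this trivial, it just shows the mechanism): the programmer says what lives in fast on-chip shared memory versus global device memory, instead of relying on an implicit cache to guess.

    __global__ void scale(const float* in, float* out, int n, float factor) {
        __shared__ float tile[256];               // fast on-chip, per-block scratchpad
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            tile[threadIdx.x] = in[i];            // stage from slow global memory
        __syncthreads();                          // block-wide barrier
        if (i < n)
            out[i] = tile[threadIdx.x] * factor;  // read back from shared memory
        // assumes the kernel is launched with 256 threads per block
    }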


Modern hardware is several different things. GPU programming does not have a realistic model of what CPU hardware is like: even though CPUs have gained more cores and SIMD, they are still very cache-oriented and branch-prediction-oriented, while GPUs aren't that at all.


And yet the most common language used for GPU programming is a C++ dialect.


And it took NVidia ten years to make it match the ISO C++ memory model introduced in C++11, as per NVidia's talk at CppCon 2019.


And the C++11 memory model was added so that the language would map to the execution model of the underlying hardware, not vice versa, which refutes the parent.


Not really: C++11 initially got its memory model from the .NET and Java memory models, two mostly hardware-agnostic stacks, and then expanded on top of them.
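
For concreteness, the C++11 memory model being discussed is the std::atomic / std::memory_order machinery; a minimal release/acquire publication sketch:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;
    std::atomic<bool> ready{false};

    void producer() {
        payload = 42;                                      // plain write
        ready.store(true, std::memory_order_release);      // publish it
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {}  // wait for the flag
        assert(payload == 42);                             // guaranteed to see 42
    }

    int main() {
        std::thread a(producer), b(consumer);
        a.join();
        b.join();
    }

How those orderings map onto actual fences or plain loads and stores is then up to each target, which is the crux of the disagreement above.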


I'd argue that modern GPU pipelines are quite different from the API programming models they support; tile-based renderers in particular do lots of funny things while maintaining the illusion that you are running on the abstract OpenGL pipeline.

So even with GPUs you have a similar situation.


I was talking about stuff like CUDA, not OpenGL.



