D and Go are not trying to provide us with a better C, because they don't share the concept of a language that translates straightforwardly to assembly but is still designed so that you can build good abstractions with it.
This is what you need for kernel programming, embedded systems, and a good deal of system programming as well.
At the same time, C is one of the most-used languages in the world, with a vast code base, and yet almost all the attempts in the language-design area are about much higher-level or esoteric stuff. Not that I don't want that kind of research, but it's bizarre that no one is focusing where a lot of the meat is.
I bet this better C will arise within a few years at most, and it is not going to come from academia. The times are apparently ripe; there is even a C conf this year ;)
I take this statement to mean: academia is failing us by not providing us with the next-generation C. If that was the intent, I don't think it's a fair jab.
Academia IS actively investigating low-level languages: ATS, Cyclone, and typed assembly are all examples. But, it's not academia's job to create widely-used languages and tools. Sometimes they do create something that achieves wide use, but that's mostly a side benefit. Their job is to generate and evaluate useful "idea nuggets" that can inform future system builders and to train students to do novel systematic research.
There seems to be a common meme that academic CS is "out of touch" with what real developers need. I think it's mostly unfair: while the world always needs a slightly better web framework, compiler, language, etc., that's not the goal of academic CS; they aim to fertilize the ground with ideas that facilitate the growth of more big and small ideas.
I don't mean to say that academia is failing or should be responsible for giving us this "better C", just that where I guess we should look to see the "new C" coming from is not academia. However, I think academia may have a role in this matter: studying the interaction between the programmer and the programming language when it comes to getting things done. This is probably being done already (though I can't recall famous recent studies on it) and could provide useful hints about how we can incrementally build / modify C into a more reliable and less bug-prone language.
However, ultimately the "new C" will be designed by one or two people at most, as has always happened with this kind of practical programming language.
D and Go are just following that thread.
Eventually all mainstream OSes will have such languages and C will be legacy.
Microsoft is already slowly doing that by declining to support C beyond C90, and by adding COM-based APIs as the default Windows API. For now mainly for Metro, perhaps for the complete OS in later versions.
Apple as well, by adding reference counting and GC to its systems language, Objective-C.
C only spread thanks to UNIX, and it brought upon us the wrath of buffer-overrun and dangling-pointer security exploits.
We need better languages for doing systems programming.
Additionally, GC is not strictly necessary to solve buffer overruns and dangling pointers. Region-based memory management is an alternative for many use cases.
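For illustration, here is a minimal arena-style sketch in C (the names are hypothetical, not from any particular library): everything allocated from a region is freed at once, so for many workloads there is simply no per-object lifetime to get wrong.

    #include <stdlib.h>

    typedef struct {
        char  *buf;
        size_t used, cap;
    } region;

    region region_new(size_t cap) {
        region r = { malloc(cap), 0, cap };
        return r;
    }

    void *region_alloc(region *r, size_t n) {
        if (r->buf == NULL || n > r->cap - r->used)
            return NULL;              /* sketch: no growth, just fail */
        void *p = r->buf + r->used;
        r->used += n;
        return p;
    }

    void region_free_all(region *r) {
        free(r->buf);                 /* every object dies together */
        r->buf = NULL;
        r->used = r->cap = 0;
    }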
And as for GC in the kernel: of course you can. The issue is that the most efficient GCs need giant locking, and giant locking in a kernel is something you don't want.
In fact you can live perfectly well without GC. The burden of integrating it into kernel space outweighs the benefit of using it.
> Eventually all mainstream OSes will have such languages and C will be legacy.
That's just speculation. Personally I have high hopes for languages with optional-only GC and much stricter memory management principles, à la Rust.
Don't forget that UNIX was once upon a time a research operating system, and took a few decades to have industry impact.
New ideas take time to mature.
I imagine that the kernel could easily be compiled with the Objective-C or Objective-C++ compiler, if Apple so wished.
All languages require runtimes, even C. The thing is that C, being the high-level assembler that it is, uses the operating system as its runtime.
The C language is also part of Objective-C, in a similar vein to how C++ supports most of the C89 features, with small exceptions.
So it is possible to have the Objective-C runtime compiled using the C subset of Objective-C. This is known as compiler bootstrapping.
No, admittedly, I don't, at any more than a basic level. But you're still wrong.
Try compiling, linking, and running an ObjC program without the ObjC runtime. See how that fails. See how long it takes you to write a minimal runtime that can run your program in user-space, and then kernel-space. It won't be quick.
Try compiling, linking, and running a plain-vanilla C program without the "C runtime" (I guess you mean a combination of libc, libgcc, and crt.o, basically what you get when you pass -nostdlib -nostartfiles to gcc). Yep, that'll fail too. Then get it to compile and run anyway. Not that hard.
That's the difference I'm talking about.
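For concreteness, a hedged sketch of the C side of that experiment, assuming x86-64 Linux and gcc (build with: gcc -nostdlib -nostartfiles -static hello.c):

    /* No libc, no crt start files: we provide the entry point and
       talk to the kernel directly via raw syscalls. */
    static long sys3(long n, long a, long b, long c) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(n), "D"(a), "S"(b), "d"(c)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        static const char msg[] = "hello without a runtime\n";
        sys3(1, 1, (long)msg, sizeof msg - 1);  /* SYS_write */
        sys3(60, 0, 0, 0);                      /* SYS_exit  */
    }

Doing the same for an ObjC program means reimplementing message dispatch, class registration, and so on first, which is the point being made.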
At the very least, kernels like Linux and Mach/Darwin already have kernel-friendly replacements for the libc functionality they commonly use. Doing something similar for ObjC would be time consuming. Certainly it isn't impossible, but note that I never said that: I merely pointed out it wasn't easy.
The same applies if you were using the C subset of a C++ compiler.
I am old enough to remember the days when UNIX and C were funny research projects. Look at them now.
So I leave you with a list of research projects for the boring days, when you don't have anything to read.
I don't doubt that C is not the systems programming language of the future. But that future language isn't going to be one built around automatic GC either.
Additionally, I find it interesting to note that, had you in fact been very familiar with all the work you mention, you would have noticed that many of these systems go through significant effort to sidestep the GC.
It's not just some guys showing off papers at OS geek conferences.
Native Oberon, for example, was for a long time the main operating system at the operating systems research department at ETHZ. Most researchers used it as their daily OS for all the tasks you can think of.
The GC was done at kernel level. Besides Oberon code, there is just a little bit of assembly for the device drivers and boot loader.
That's a ridiculous dismissal to make; we live in a world where hand-written assembly is considered a badge of honor, where people prefer to patch up a 40-year-old dinosaur rather than make something new, where people go to ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973. Is it any wonder that we're not using anything better?
Even in video games, where this held true for longer, it is generally no longer the case.
> where people prefer to patch up a 40-year-old dinosaur rather than make something new
Also not true. Generally, engineers prefer to make new things. The problem is that creating new products from scratch to replace old ones almost always ends up being more effort than expected. Failure is what makes patching the dinosaur the better approach, rather than engineering desires; Netscape is a classic example.
> where people go to ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973
I don't know what world you live in, because in the world I live in, computers with mice are much more popular than the alternative.
It is true that none of these systems are in use. You can make excuses for them, but the fact of the matter is that a lot of smart people have been trying, with very little success. There is little evidence to support that this is a superior approach.
There's no such thing as language copyright.
While NeXT did not exactly create Objective-C, it acquired a license to be able to create its own implementation, which became the official Objective-C compiler.
Plus, let's see what the outcome of the Oracle vs Google trial is regarding copyright.
Implementation -- yes, the language itself -- no.
What about languages like Cyclone, decac, Clay?
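(For context, the snippet below is, if I remember correctly, from the original Bourne shell source, where macros were used to dress C up as ALGOL 68:)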
LOCAL STRING copyto(endch)
REG CHAR endch;
REG CHAR c;
WHILE (c=getch(endch))!=endch ANDF c
DO pushstak(c|quote) OD
IF c!=endch THEN error(badsub) FI
Why not go all the way with HCTIWS, when you've already abandoned your sanity to Cthulhu?
Similar efforts (the linguist at DRI who did a #define of totally new control structures . . . in Russian, though he wasn't Russian, it was just a lark) are why we can't have nice things.
Isn't that just so much clearer and nicer-looking than ".rs" or ".java" or ".g"? It's really the fine aesthetic points like this that can make or break a language.
One suggested improvement: Rust's extension should preferably be .® to jibe with the language's official logo. And with the advent of emoji, we can give Perl a file extension of .🐪 (Unicode Character 'DROMEDARY CAMEL' (U+1F42A)). Perl 6 can get the Bactrian camel instead.
Extensions and filenames can be made less central, which I'd consider a more reasonable development than superficial silly hacks like that. This is IDE territory, certainly in 2012 and for years to come.
If you want to render your extensions to funny happy Unicode symbols in your specific purpose file manager, then fine by me. Actually storing filenames like that would just make you a number of new enemies.
There are no typed macros.
There are inlined functions...
While the type system of C is basically size-based, a lot of types have an ambiguous size (int, for example).
Use stdint.h definitions.
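For example, the fixed-width types standardized in C99:

    #include <stdint.h>

    uint8_t  flags;      /* exactly 8 bits, unsigned */
    int32_t  counter;    /* exactly 32 bits, signed */
    uint64_t timestamp;  /* exactly 64 bits, unsigned */
    /* plain int is only guaranteed to be at least 16 bits wide */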
Having said that, I would like namespaces in C!
I'd also like to point out that setting individual bits in an integer is not usually atomic. So the "syntactic sugar" here could be pretty error-prone (and I can't say it was very high on my list of "features I wish C had").
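A sketch of the safe pattern, using C11 atomics:

    #include <stdatomic.h>

    atomic_uint flags;

    /* flags |= 1u << 3 on a plain unsigned is a non-atomic
       read-modify-write: another thread can slip in between the load
       and the store. C11 makes the whole update indivisible: */
    atomic_fetch_or(&flags, 1u << 3);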
You should be coding to (or with) the OS you're working on. If you're doing multiplatform stuff, then #defines with a naming scheme work, and are already standard practice.
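A minimal sketch of that naming-scheme approach, built on the compilers' predefined platform macros (MYLIB_PLATFORM is a made-up name):

    #if defined(_WIN32)
    #  define MYLIB_PLATFORM "windows"
    #elif defined(__APPLE__)
    #  define MYLIB_PLATFORM "darwin"
    #elif defined(__linux__)
    #  define MYLIB_PLATFORM "linux"
    #else
    #  error "unsupported platform"
    #endif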
It's not impossible to do right, though; C++ has "allocator" template parameters to do exactly that (there's a default implementation that you can "override" with a custom class).
I've been trying to use (or rather, bend) rust for kernel development lately, and memory allocation was my biggest limitation so far (including stack allocation). Definitely looking forward to future improvement on the language, it looks very promising.
"I'd like to see a 'runtime-less Rust' myself, because it'd be great if we could implement the Rust runtime in Rust. This might be useful for other things too, such as drivers or libraries to be embedded into other software. (The latter is obviously of interest to us at Mozilla.) Rust programs compiled in this mode would disable the task system, would be vulnerable to stack overflow (although we might be able to mitigate that with guard pages), and would require extra work to avoid leaks, but would be able to run without a runtime.
If anyone is interested in this project, I'd be happy to talk more about it -- we have a ton of stuff on our plate at the moment, so we aren't working on it right now, but I'd be thrilled if anyone was interested and could help."
Example based on the syntax from the article:
x[0:6] = 42; // Set six bits starting with lsb 0 to 42
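For comparison, a hedged sketch of what that could desugar to in today's C (BIT_SLICE_SET is a made-up name; assumes x is unsigned and width is less than the width of unsigned):

    #define BIT_SLICE_SET(x, lsb, width, val) \
        ((x) = ((x) & ~(((1u << (width)) - 1u) << (lsb))) | \
               (((unsigned)(val) & ((1u << (width)) - 1u)) << (lsb)))

    BIT_SLICE_SET(x, 0, 6, 42);  /* set bits 0..5 of x to 42 */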
In fact, it hasn't reached top priority, since we can do without it for now (we have bitfields). The issue with slicing is more a question of design: does slicing need to be a constant expression, or should we allow more expressive power (and then how could we implement that, and how could we check it)?
Anyway, I'm looking for a clever way to define values accessed in non-uniform ways.
Integers as bit arrays were the simplest idea to test (and seem useful to me). Now the compiler has almost everything I need to implement that kind of syntactic sugar easily.
After reading the Wikipedia page about it, it doesn't seem to be available for any common platform.
How would it perform as a systems programming language today (compared to C)?
YALLBOSCAC? How about a language named YALL and boscc as the name of the compiler?