Hacker News
C (Cbang) - A system oriented programming language (epita.fr)
90 points by kalenz 1748 days ago | 83 comments

I didn't check the details, but I would love to see ten attempts every year trying to solve the same problem.

D and Go are not trying to provide us with a better C, because they don't share the concept of a language that translates straightforwardly to assembly but is still designed so that you can build good abstractions with it.

This is what you need for kernel programming, embedded systems, and a good deal of system programming as well.

At the same time, C is one of the top-used languages in the world, with a vast code base, and yet almost all the attempts in the language area are about much higher-level or esoteric stuff. Not that I don't want that kind of research, but it's bizarre that no one is focusing where a lot of the meat is.

I bet this better C will arise within a few years at most, and it is not going to come from academia. Times are apparently mature; there is even a C conf this year ;)

> I bet this better C will arise in a few years at max, and it is not going to come from academia.

I take this statement to mean: Academia is failing us by not providing us with the next generation C. If that was the intent, I think it's not a fair jab.

Academia IS actively investigating low-level languages: ATS, Cyclone, and typed assembly are all examples. But, it's not academia's job to create widely-used languages and tools. Sometimes they do create something that achieves wide use, but that's mostly a side benefit. Their job is to generate and evaluate useful "idea nuggets" that can inform future system builders and to train students to do novel systematic research.

There seems to be a common meme that academic CS is "out of touch" with what real developers need. I think it's mostly unfair: while the world always needs a slightly better web framework, compiler, language, etc., that's not the goal of academic CS; they aim to fertilize the ground with ideas that facilitate the growth of more big and small ideas.

> I take this statement to mean: Academia is failing us by not providing us with the next generation C. If that was the intent, I think it's not a fair jab.

I don't mean to say that academia is failing or should be responsible for giving us this "better C", just that where I guess we should look to see the "new C" coming from is not there. However, I think academia may have a role in this matter: studying the interaction between the programmer and the programming language with respect to getting things done. This is probably being done (though I can't remember any famous recent studies about it) and could provide useful hints about how we can incrementally build / modify C to get a more reliable and less bug-prone language.

However, ultimately the "new C" will be designed by one or two guys at most, as has always happened with this kind of practical programming language.

I mostly agree, but I would make a small correction: that's not what academic CS is paid for. Jonathan Shapiro didn't stop work on BitC because he ran out of enthusiasm but because he ran out of funding. Habit will only remain a live project while its PIs can keep the ~~grant money~~ spice flowing.

Native Oberon, Spin, Singularity, Home, Inferno are all operating systems written in GC enabled system programming languages.

D and Go are just following that thread.

Eventually all mainstream OSes will have these types of languages, and C will be legacy.

Microsoft is already slowly doing that by dropping support for C standards newer than C90 and adding COM-based APIs as the default Windows API. Now mainly for Metro, perhaps for the complete OS in later versions.

Apple as well, by adding reference counting and GC to their systems language, Objective-C.

C only spread thanks to UNIX, and brought upon us the wrath of buffer-overrun and dangling-pointer security exploits.

We need better languages for doing systems programming.

Note that Apple is abandoning garbage collection for Objective-C.

Additionally, GC is not strictly necessary to solve buffer overruns and dangling pointers. Region-based memory management is an alternative for many use cases.

Yes, but reference counting (ARC) is also automatic memory management.

Reference counting is the low-end of automatic memory management and garbage collection.

And about GC in the kernel: of course you can. The issue is that the most efficient GCs need giant locking, and giant locking in a kernel is something that you don't want.

In fact you can easily live without GC. The burden of integrating it into kernel space outweighs the benefit of using it.

Funny, because Windows uses reference counting in the kernel.


Interesting that none of those systems are in active usage...

> Eventually all mainstream OS will have such type of languages and C will be legacy.

That's just speculation. Personally I have high hopes for languages with optional-only GC and much stricter memory management principles. À la Rust.

Windows and Mac OS X are, and they are slowly incorporating such changes, as I mentioned.

Don't forget that UNIX was once upon a time a research operating system, and took a few decades to have industry impact.

New ideas take time to mature.

Objective-C isn't used in XNU, the kernel that Mac OS X uses; it's a mixture of C and a subset of C++ (the latter used by IOKit).

System programming is more than just the kernel.

I imagine that the kernel could easily be compiled with the Objective-C or Objective-C++ compiler, if Apple so wished.

You "imagine" incorrectly. A functioning Obj-C environment requires a runtime. Currently that runtime (libobjc) is written in C, and it requires a C library to function. Yes, you could rewrite those portions to use only functions available in kernel-land, but it is by no means "easy" as you suggest.

This just goes to show you don't understand compilers.

All languages require runtimes, even C. The thing is that C, being the high-level assembler it is, uses the operating system as its runtime.

The C language is also part of Objective-C, in a similar vein as C++ also supports most of the C89 features with small exceptions.

So it is possible to have the Objective-C runtime compiled using the C subset of Objective-C. This is known as compiler bootstrapping.

> This just goes to show you don't understand compilers.

No, admittedly, I don't, beyond a basic level. But you're still wrong.

Try compiling, linking, and running an ObjC program without the ObjC runtime. See how that fails. See how long it takes you to write a minimal runtime that can run your program in user-space, and then kernel-space. It won't be quick.

Try compiling, linking, and running a plain-vanilla C program without the "C runtime" (I guess you mean a combination of libc, libgcc, and crt.o, basically what you get when you pass -nostdlib -nostartfiles to gcc). Yep, that'll fail too. Then get it to compile and run anyway. Not that hard.

That's* the difference I'm talking about.

At the very least, kernels like Linux and Mach/Darwin already have kernel-friendly replacements for the libc functionality they commonly use. Doing something similar for ObjC would be time consuming. Certainly it isn't impossible, but note that I never said that: I merely pointed out it wasn't easy.

C hardly requires its runtime. Objective-C without the runtime is just C; if you write the runtime, GC, and other low-level system components in the C subset of Objective-C, you are just writing the kernel in C.

Technically you will be using the Objective-C compiler, so it is still Objective-C even if the syntax looks like C.

The same applies if you would be using the C subset of a C++ compiler.

This is in the original context of how all systems will be written in "GC enabled system programming languages". If the language is not GC enabled, due to missing a runtime or w/e, then the technicality isn't relevant.

What is relevant, is that universities and some companies seem to have another opinion.

I am old enough to remember the days when UNIX and C were funny research projects. Look at them now.

So I leave you with a list of research projects for the boring days, when you don't have anything to read.


The problem is that, at the end of the day, some code somewhere is going to have to deal with resource allocation. Generally speaking, with all the other fluff aside, an operating system fundamentally manages and multiplexes resources. It's naive to think that resource management would be best done in a language with automatic GC. Somebody, somewhere, still has to do the actual allocation.

I don't doubt that C is not the systems programming language of the future. But it's not going to be done in a system that's based around automatic GC either.

Read the papers of proven work, instead of stating your beliefs.

You use the word proven as if it means something. The work you list is no more proven in terms of building production systems than any other research work.

Additionally, I find it interesting to note that if you had in fact been very familiar with all the work you mention, you would have noted that many of these systems go to significant effort to sidestep the GC.

It is proven, because groups of people went through the effort of implementing those systems and used them for daily work as well.

It's not just some guys showing off papers at OS geek conferences.

Native Oberon, for example, was for a long time the main operating system at the operating systems research department at ETHZ. Most researchers used it as their daily OS for all the tasks you can think of.


The GC was done at kernel level. Besides Oberon code, there is just a little bit of assembly for the device drivers and boot loader.

> Interesting that none of those systems are in active usage...

That's a ridiculous dismissal to make; we live in a world where hand-written assembly is considered a badge of honor, where people prefer to patch up a 40-year-old dinosaur rather than make something new, where people go to ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973. Is it any wonder that we're not using anything better?

> we live in a world where hand-written assembly is considered a badge of honor

Even in video games where this held true for longer, this is generally no longer the case.

> where people prefer to patch up a 40-year-old dinosaur rather than make something new

Also not true. Generally, engineers prefer to make new things. The problem is that creating new products from scratch to replace old ones almost always ends up being more effort than expected. Failure is what makes patching the dinosaur the better approach, rather than engineering desire; Netscape is a classic example.

> where people go to ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973

I don't know what world you live in, because in the world I live in, computers with mice are much more popular than the alternative.

The fact that none of these systems is in usage is true; you can make excuses for them, but the fact of the matter is that there are a lot of smart people trying and very little success. There is little evidence to support that this is a superior approach.

I love Inferno, but the actual kernel is written in C, not Limbo.

Objective-C was not created by Apple.

It was created by NeXT, which was bought by Apple, thus Apple holds the language copyright.

It was not created by NeXT: https://en.wikipedia.org/wiki/Objective-C#History

There's no such thing as language copyright.

A programming language is under the copyright of its creator, who can decide to put it in the public domain, standardize it, give it an open source license, whatever.

While NeXT did not exactly create Objective-C, it acquired a license to be able to create its own implementation, which became the official Objective-C compiler.

Plus, let's see what the outcome of the Oracle vs Google trial regarding copyright is.

> A programming language is under the copyright of its creator, who can decide to put it in the public domain, standardize it, give it an open source license, whatever.

Implementation -- yes, the language itself -- no.

What requirements would you have for a better C? The only major requirement that I can think of is that it's compiled directly to assembly without requiring an intermediate runtime or VM. This helps avoid the fundamental "chicken and egg" problem that a lot of higher level languages have. This means that GC needs to be an optional component. It does not specifically exclude it, however.

hi antirez, do you have a top contender from the current crop or a suspicion of which/what C Next would be like? I would guess that you would also eliminate languages like ATS, Felix, Rust and Vala.

What about languages like Cyclone, decac, Clay?

I think that languages that are a just thin shell on top of C (as opposed to languages that merely target C, e.g. Chicken Scheme) might have some potential. I'll call this coffeescriptification--still eagerly awaiting the first transpiled language that does nothing but remove C's braces and semicolons. If someone doesn't do it by next April Fools, I certainly will (I already have the name in mind: Glee (the italics are part of the name, and mandatory)).

Sounds like what you're looking for is the original Bash source code, circa 1977, for example [1]:

  LOCAL STRING	copyto(endch)
  	REG CHAR	endch;
  	REG CHAR	c;
  	WHILE (c=getch(endch))!=endch ANDF c
  	DO pushstak(c|quote) OD
  	IF c!=endch THEN error(badsub) FI
Applicable macros if you want to give your code that classic ALGOL 68 smell in [2].

[1] http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/...

[2] http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/...

Hate to be pedantic, but that's the Bourne shell, not bash.

Yup, totally correct and really not that pedantic. I realized that about two hours later when I reviewed my own comment and promptly punched myself…

I wonder why IF closes with FI, REP closes with PER and LOOP closes with POOL, but SWITCH fails to complete the series: it closes with ENDSW...

Why not go all the way with HCTIWS, when you've already abandoned your sanity to Cthulhu?

"An original idea. That can't be too hard. The library must be full of them."

I remember going through that code, and wanting to simultaneously reach for a spraycan of holy water, and claw my eyes out.

Similar efforts (the linguist at DRI who did a #define of totally new control structures . . . in Russian, though he wasn't Russian, it was just a lark) are why we can't have nice things.

Funny how everyone wants to program in BASIC but doesn't want to admit it. Like the Python and Lua guys. Try FreeBASIC, it's a compiler.

Haha! Or insert Pascal in there and try out some FreePascal, the perennial number 5 to 8 on most of the alioth shootouts for the past 20 years.

Not sure if irony, or really dumb comment.

:D It has pointers too, will make a man outta you. Then you'll be able to easily move on to C++ as the OOP structure is almost the same.

You're making it sound like C++ with Basic syntax, which does not increase its appeal.

No, it's BASIC with the potential for C++ syntax when using objects.

I too am looking for this in Java. Especially if the transpiler was easily extendable to add your own language features. The key would be to make sure it transpiles both ways so that it could be adopted in a corporate, Java-only environment.

One thing often overlooked in language design is the filename. It's 2012, why are languages still stuck with IBM-era filename extensions when they could use Unicode?! For instance:

Rust: filename.ℛ

Java: filename.☕

Glee: filename.☺

Isn't that just so much clearer and nicer-looking than ".rs" or ".java" or ".g"? It's really the fine aesthetic points like this that can make or break a language.

Brilliant. I'll be sure to mention your <s>name</s> memory address in the foreword of the obligatory O'Reilly book (taking suggestions for the cover animal; I'm partial to the thylacine, myself).

One suggested improvement: Rust's extension should preferably be .® to jive with the language's official logo.[1] And with the advent of emoji, we can give Perl a file extension of .🐪 (Unicode Character 'DROMEDARY CAMEL' (U+1F42A)). Perl 6 can get the bactrian camel instead.

[1] http://en.wikipedia.org/wiki/File:Rust_programming_language_...

I am sure your sarcasm tag is on. That would be horrible.

It's not clearer at all for me, especially when some of these characters won't render reliably on several platforms (for instance, the java char isn't showing in this browser right now).

Extensions and filenames can be made not to be so central, which I'd consider a more reasonable development over superficial silly hacks like that. This is IDE domain, definitely in 2012 and for years to come.

If you want to render your extensions to funny happy Unicode symbols in your specific purpose file manager, then fine by me. Actually storing filenames like that would just make you a number of new enemies.

Uhh, sure.

I have a couple of problems with their analysis of C.

> There's no typed macros

There are inlined functions...

> While the type system of C is basically size based, a lot of types have an ambiguous size (int for example)

Use stdint.h definitions.

Having said that, I would like namespaces in C!

Indeed, however this first blog post doesn't tell us much about the language itself. The couple examples seem a bit... anecdotal.

I'd also like to point out that setting individual bits in an integer is not usually atomic. So the "syntactic sugar" here could be pretty error-prone (and I can't say it was very high on my list of "features I wish C had").

But they've identified this as being for systems programming where you don't have any guarantee of stdint.h holding true (POSIX compliance). Their implementation is much more flexible, albeit slightly reinventing the wheel.

I can see what you mean, but stdint.h is just a bunch of typedefs standardising the naming scheme. It doesn't take much work to sort it out.

Indeed, Linux uses u32 and friends, for instance. It's really not much of a problem in practice (kernels are by definition very hardware-dependent anyway...)

If you're doing systems-level programming, you shouldn't be counting on anything being portable anyway. Reimplementing it is worthless for projects that are on a single OS. If you're implementing the OS, then these decisions need to be made anyway, independent of the language.

You should be coding to (or with) the OS you're working on. If you're doing multiplatform stuff, then #defines with a naming scheme work, and are already the standard practice.

How about a few more default data structures, like resizable arrays or hashes? I find the biggest bump in my productivity in most other languages is that those two things are there already.

The problem is dynamic memory allocation. It's actually one of the gripes I have with the Rust language [1]. As a C coder I find it hard to take seriously a language that doesn't let me customize memory allocation easily.

It's not impossible to do right, though; C++ has allocator template parameters to do exactly that (there's a default implementation that you can "override" with a custom class).

[1] https://news.ycombinator.com/item?id=3027777

To complement Marijn's response, here's a blog post from one of the Rust developers that gives a bit of an introduction into regions:


Rust isn't finished yet. It's about to get 'regions', which come with custom allocators.

Hey, thanks for pointing that out (and thanks to kibwen for the link).

I've been trying to use (or rather, bend) rust for kernel development lately, and memory allocation was my biggest limitation so far (including stack allocation). Definitely looking forward to future improvement on the language, it looks very promising.

At least one of the developers wants to make Rust's runtime optional, to allow the runtime to itself be written in Rust. That would probably also serve your goals as well, so perhaps you'd be interested in helping out? :)


"I'd like to see a 'runtime-less Rust' myself, because it'd be great if we could implement the Rust runtime in Rust. This might be useful for other things too, such as drivers or libraries to be embedded into other software. (The latter is obviously of interest to us at Mozilla.) Rust programs compiled in this mode would disable the task system, would be vulnerable to stack overflow (although we might be able to mitigate that with guard pages), and would require extra work to avoid leaks, but would be able to run without a runtime.

If anyone is interested in this project, I'd be happy to talk more about it -- we have a ton of stuff on our plate at the moment, so we aren't working on it right now, but I'd be thrilled if anyone was interested and could help."


Thanks for the pointers, I'll look into that when I have more time :)

Check out the binarytrees benchmark for an example of the placement new syntax:


Allocators were well intentioned, but writing them is not a walk in the park, and you cannot pass, say, a “std::vector<T, my_allocator<T>>” to a function expecting “const std::vector<T>&”.

The STL's templatization over allocators is over the allocator class, not the allocator instance, which makes them effectively useless IMO.

That's why I mentioned "default". If you have more specialized requirements, you can write your own version. :)

Regarding the use of integers as bit arrays, it would be nice if the syntax supported referring to a subset of the bits, similarly to bitfields in structs.

Example based on the syntax from the article:

  x: int<+32>;
  x[0:6] = 42; // Set six bits starting with lsb 0 to 42

Yes, this is in the TODO list ;)

In fact, it hasn't reached top priority since we can do without it for now (we have bitfields.) The issue with slicing is more a question of design: does slicing need to be a constant expression, or should we allow more expressive power (and then how could we implement that and how could we check it…)?

Anyway, I'm looking for a clever way to define values accessed in non-uniform ways.

Integers as bit arrays were the simplest idea to test (and seem useful to me.) Now the compiler has almost everything I need to implement that kind of syntactic sugar easily.

I'm surprised there's no mention of Bliss. A nice low-level DEC-10 language, later generalized a bit.

I remember reading about it in an article by Olin Shivers (the history of T, at http://www.paulgraham.com/thist.html ).

After reading the wikipedia page about it, it doesn't seem to be available for any common platform.

How would it perform as a systems programming language today (compared to C)?

It would work fine. It had bit-level operators, easy assembler integration, simple macros. Decent looping constructs after the first version. Everything was an expression, which makes lots of things easier, and it had a good optimizing compiler (remember Wulf?). It did have a major quirk: you had to be explicit about addresses vs. values. To get a value you put a '.' in front of an address. It used to drive some people crazy, but I never had much of a problem with it.

If we are going to play at the low level, taking a look at Machine Forth might not hurt.

I guess it's time I made my own language... everybody's doing it!

An uncommonly high number of people on HN (or mentioned in stories posted to HN) are programming language enthusiasts, so it certainly seems like it.

What could be more tedious than another low-level language based on subtle complaints about C?

Complaining that someone has invented a new low-level language?

> What could be more tedious than another low-level language based on subtle complaints about C?

YALLBOSCAC? How about a language named YALL and boscc as the name of the compiler?

Seriously? I personally think this is an enormously exciting space to watch people explore, and I'm glad that the field seems to be enjoying a renaissance of sorts at the moment. There are a lot of powerful features that modern languages have developed, and if some of them can be brought to the C world without sacrificing the low-level features that systems programmers need, it will benefit everyone.
