The reference D compiler is now open source (dlang.org)
566 points by jacques_chirac 22 days ago | 303 comments



Good news indeed.

Switched to D 4 years ago, and have never looked back. I wager that you can sit down a C++/Java/C# veteran and say: write some D code. Here's the manual, have fun. They will, within a few hours, be comfortable with the language and be a fairly competent D programmer. Very little FUD surrounds switching to yet another language with D.

D's only issue is that it does not have general adoption, which I'm willing to assert is only because it's not on the forefront of the cool kids' language of the week. Which is a good thing. New does not always mean improved. D has a historical nod to languages of the past: it is trying to improve on the strengths of C/C++, smooth out the rough edges, and adopt more modern programming concepts. Especially with trying to be ABI compatible, it's a passing of the torch from the old guard to the new.

Regardless of your thoughts on D, my opinion is that I'm sold on it; it's here to stay. In 10 years D will still be in use, whereas the fad languages will just be footnotes in computer science history: nice experiments that brought in new ideas but were just too far out on the fringes, limiting themselves to the "thing/fad" of that language.


I think a lot of languages that get popular have to have a thing. Like a thing they do well, and hopefully change the world a little...

Python had math and has lots more math now. R also has math and started to kill Python, but now Python has TensorFlow and PyTorch. Scala has Spark. Java has Tomcat, and everything that followed, which is probably 20% of the world's mass by volume. Go has Docker. Ruby has/had a railroad or something? JS has... well, f... I don't know where to start here.

Does D have a thing?


> Does D have a thing?

Probably its most useful feature is rather subtle. It's very easy to express diverse ideas in code in D without having to resort to contortions. It's something people realize after having worked with D for a while.

It is not the result of having feature X (you can always find feature X in other languages or a way to make X work), it's the combination of various X's.

For example, some algorithms express naturally in a functional manner, some imperative, some OOP, etc. It isn't necessary to buy into a manner for the whole program, just use the manner that fits the particular part of the program.

You might like to use FP here and there, but don't want to deal with monads. You can use OOP for the AST, but don't want to box integers into an OOP class. You like dynamic typing for one type, but it's a bad fit for another. You can garbage collect for a quick prototype or a seldom used part of the code, and carefully manage memory for the release or the hot spots. (The Warp preprocessor I wrote did that. https://github.com/facebookarchive/warp)
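To make that paradigm-mixing point concrete, here's a minimal sketch (my own illustration, not from the post; the function names are made up) showing the same computation written functionally and imperatively in one D program:

```d
import std.stdio;
import std.algorithm : filter, map;
import std.range : iota;

// Functional style: a lazy range pipeline, evaluated on demand.
auto evenSquares(int n)
{
    return iota(n).filter!(x => x % 2 == 0).map!(x => x * x);
}

// Imperative style: a plain loop doing the same work.
int sumEvenSquares(int n) pure
{
    int total = 0;
    foreach (x; 0 .. n)
        if (x % 2 == 0)
            total += x * x;
    return total;
}

void main()
{
    writeln(evenSquares(10));    // prints [0, 4, 16, 36, 64]
    writeln(sumEvenSquares(10)); // prints 120
}
```

Neither style is privileged by the language; you pick whichever fits the part of the program at hand.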

There's a lot less time hammering square pegs into round holes.

I've seen more than enough presentations on "How To Do Technique X In Language Y" and everyone says how clever that is but the contortions are just too awful to contemplate.

The flip side is that D won't be satisfying to a purist adherent of any of those paradigms.


I really like how purity in D is so... practical. I like that a for loop is totally allowed in pure code as long as it modifies no data outside the pure block. One of the most annoying things in Haskell was to translate some algorithm that looks very natural in a for loop into some other sort of combination of map or reduce or something even more complicated.

(Yes, I know Haskell can also fake for loops more cleanly with monads, but they still feel awkward and unnatural.)
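As a small sketch of that practicality (function name is my own, hypothetical example): D's `pure` permits ordinary loops and local mutation, and only forbids touching mutable state outside the function.

```d
// Local mutation inside a pure function is fine; what `pure`
// forbids is reading or writing mutable global state.
int sumOfSquares(int n) pure nothrow @nogc
{
    int total = 0;
    for (int i = 1; i <= n; ++i)
        total += i * i;
    return total;
}

// int g;
// int bad() pure { return ++g; } // error: pure functions can't access globals

void main()
{
    assert(sumOfSquares(4) == 30); // 1 + 4 + 9 + 16
}
```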


My recollection from the last time I worked with D was that pure could be more complicated and strict than expected when using standard library functions; e.g. using trigonometry sets hardware global state, and is thus impure. I don't remember if there was a good workaround to convey the intent of purity despite that.


In D "pure" means "referentially transparent".


R is old. R always had math. That's not new. Python always had a lot of other things. Strange depiction.


I think you're a new engineer - R has always been math, from the bottom up. Python is the relative newcomer to math.

Go was exciting before Docker existed.

JS has the browser, and then had Node.

Scala has functional programming and the JVM. Spark is not really a "Scala thing" (although it is written in Scala). You can use other languages with Spark.

Tomcat is hardly the first thing that comes to mind with Java.

What a bizarre perspective you have!


JS has actual write-once-run-anywhere, with an above-average async story.


Python was not popular because of maths. Python was very popular long before the scientific computing stuff. Python was chosen for that for two reasons: ease of extension using C, and being extremely easy to read and write.

There's a reason that Python is commonly referred to as 'executable pseudocode'.

But yes, Ruby had Rails, JS had web browsers, Perl made string handling easy, Java was appealing to managers (don't let your moron employees write undefined behaviour: CHOOSE JAVA!), etc.


I'm pretty sure Java's feature is that it has java.lang.NullPointerException


> D's only issue is that it does not have general adoption, which I'm willing to assert is only because it's not on the forefront of the cool kids language of the week. Which is a good thing.

The reference compiler not being open source makes a language not cool with the cool kids. (Not that I'm blaming anyone).


I'm not sure that "it's going to stay", as you say, but at the very least it's been hugely influential on C++ (if constexpr, anyone?) and even for just that it is and has been valuable for everyone working in C++ land.


I think D will hit some sort of critical mass soon, probably through the form of a corporate sponsor (a la Google and Go).

Obviously, it's entirely possible that D will indeed fade away, but that would be a real shame.

The way D is designed is very holistic: the combination of ranges, UFCS and CTFE (and modules) all made me really hate using C++. I've ended up being more productive in D (even with toys that I never touch again) than with dynamically typed languages like Python and JS.


"if constexpr" is not inspired by "static if". In fact, proposals for static if and things like it were explicitly rejected by the committee.


Walter, Andrei, and Herb proposed "static if" in N3329 (after its big success in D), which was ultimately rejected. Ville Voutilainen explicitly revived it in N4461 with changes. Ville then iterated on that proposal a few times (P0128R0 and P0128R1). Finally, Jens Maurer took that and made some syntax changes with committee input, resulting in P0292R1, which was the accepted proposal. "static if" from D absolutely was the inspiration for "constexpr if". Each proposal references the last, in a chain going back to Walter, Andrei, and Herb's proposal.


AFAICT, yes it was. Obviously, it wasn't accepted as-is, but it was certainly motivated (i.e. "inspired") by the success of "static if" in D.


I do not agree at all. It's very obvious functionality, and would be in D either way.


I'm not sure I understand what you're trying to say. IIUC, the causality in your last sentence is the opposite of what you've been arguing? Maybe it's just a typo, but either way you're going to have to provide evidence.


I meant 'would be in C++ either way', typo.

C++ has been gaining constexpr features for years. These have nothing to do with D at all, except in the sense that all languages everywhere influence each other slightly. Certainly constexpr wasn't drawn directly from D.

if constexpr was the obvious logical next step. It has been suggested and proposed for years. Yes, people formally proposed static if, which was heavily inspired by D's static if, but that proposal was rejected because it was poorly designed, and C++'s new if constexpr is about as different as it's possible for two compile-time conditional compilation constructs to be.


How well does Dub work? That could be another issue preventing widespread adoption.


I want to say it's somewhat good, but sometimes it just doesn't work out too well, and experience may vary between developers. But I totally agree: if some of its quirks were sorted out... Sidenote: I'm really sorry I can't mention any off the top of my head; it's been a year or more since I last touched it. I did enjoy using dub, but ran into odd issues from time to time. When it breaks, you basically can't use it properly.


I use it for various smaller projects and have no issues with it. Looks good to me.


Works well for me, I use a subset though. I like dub.


Interesting. I prefer to write in high-level strongly dynamically typed garbage-collected languages when I can.

But of course I can't always do so and get the performance I want. My approach is generally to prototype in languages like Python, but implement in C89. See: https://www.youtube.com/watch?v=WNTOpl30MIQ . That's tens of thousands of times faster than our initial prototype and...

We profiled all sorts of things in order to make it that fast. Right down to cache misses, branch misprediction, and how concurrency on the CPU interacts with I/O.


What do you consider as "fad languages"?


Not going to start a flame war; everyone has their opinions. However, there are languages that come and go, and then are swift to become popular, but then are left to rust, because people stopped having smalltalk about them.


Despite the puns, as long as people start building production systems with those languages, they're going to stick around.

And several of those languages are probably used by 10x or more people than D is.

Luckily, there are so many things to build and so many programmers out there that almost any language gets a spot under the sun these days.

When was the last time a programming language died out? My guess is the last one was a language tied to its hardware...


Coffeescript is essentially dead, yet many production web sites were written in it.


For what definition of dead? Sure, not so many use it for new projects, but it is still being maintained. There is an upcoming major release which will output ES2015.

I've got some tens of libraries/apps written in CS, no plans to migrate them. Works as well today as it did 3 years ago.


> For what definition of dead?

It's at 71.0% (5th from top) on the Dreaded tab here:

http://stackoverflow.com/insights/survey/2016#technology-mos...

> I've got some tens of libraries/apps written in CS, no plans to migrate them. Works as well today as it did 3 years ago.

You might have a different view to the above then. Sounds like you're using it for the kinds of things it's good for, perhaps. (Languages can fall into vogue and (infuriatingly) go viral for all the wrong reasons... with everyone fighting it and not understanding why)


Not sure I'm using it for anything special... But one reason I chose it is my preference for minimal syntax, which is not a fashion thing and has not changed since I did Python 8 years ago. It also fixes some bugs in JS, like defaulting to globals on omitted `var` or implicit conversions on `!=`.

I do wish to have gradual static typechecking available though. This may unfortunately force me over to TypeScript or ES2015+Flow.


Heh, that's kind of funny. If that's the main criteria for "death", then Coffeescript is in good company. Except for Visual Basic, which is basically a cockroach refusing to be killed by Microsoft, everything else is alive and vibrant.

Possibly horrible by geek purity standards, but waaaaaay more used than D :)


I think it says a lot that the only top 5 in the "Loved" tab that's also present on the whole "Wanted" tab is Swift. Not that I use Swift, but it's the one with a mainstream use case today pretty much. It will be interesting to see how it looks in 5 years.


> languageS that Come and go, And then are swift to become popuLar, but then Are

> languageS Come And popuLar Are

> S C A L A

Did I do that right?


I think you hit that Node, with a ruby of a pun, and are on the right railroad tracks my friend.



Yes, he's certainly Going Forth.


I agree except the swift part (I don't even code in Swift). It's impossible for it to die because of iOS unless Apple kills it for something else.


Go is not going anywhere.


Unfortunate, isn't it? We have D, Swift, Rust, Kotlin. Go looks very sad (and borderline ridiculous) compared to these.


In trying to group those together, the best my brain has been able to come up with is a raspberry-flavored pretzel.

I'm... slightly confused. :/


Practically? How so?


That's a brainfuck


;)


To paraphrase: On a long-enough timescale all languages are fads.

The ultimate power that can actually be manifested in physical reality (as far as we know) is still Turing Machines[1]... the rest is mostly ergonomics.

[1] Well, QM may add a "little bit" of efficiency, but it doesn't add any oracular power, per se. AFAIUI, at least.


FORTRAN users beg to differ. The horse they backed will be here until the lights go out because it probably is used to power the lights at some level.


Yes, it most certainly is. All the power system applications written in the 60's and 70's were written in Fortran, as they had to be fast and run numerical analysis on massive matrices. However, a lot of that code is being rewritten in more modern languages like C++ or C, depending on the application.


> However, a lot of it is being rewritten in more modern languages like C++ or C depending on the application.

And a lot of it is embedding FORTRAN or COBOL in the new pretty C/++.

New skin, same system.


> numerical analysis on massive matrices. However, a lot of it is being rewritten

In Julia, too.


Probably some, but in the industry I'm referring to, nobody is going to let you write a critical system in some new relatively unknown language. Options are Fortran, C, C++, or Java. Julia is cool, but still really new in the cosmic scale.


> nobody is going to let you write a critical system in some new relatively unknown language.

Whose permission do you seek who will "let you" or not "let you" write a critical system in some new relatively unknown language? What if you work for yourself or are creating a startup?


Obviously a startup with no history is a totally different context. The parent commenter did mention "industries". Power in particular (think nuclear power especially!) is... necessarily conservative.


I was surprised to learn that Blackrock uses Julia.

https://juliacomputing.com/case-studies/blackrock.html


Ha! I can't say I've ever programmed in Fortran[1], but surely it too will perish... eventually. Idle meta-thought: It's actually kind of interesting how hard it is to replace "legacy" in programming... I wonder if there's something we could/should be doing to make it easier to do on a large scale?

[1] I think it's spelled that way these days, but of course given the context of the thread, maybe you're talking about old-school FORTRAN.


I think you answered your own question about why D is not more popular with the "cool kids". C++/Java/C# are overengineered, verbose, horrible languages to use. Java is a bastard of C++, and C# is the better looking bastard.

Yes we use them, and even like them in a weird Stockholm Syndrome way, but they're not fun to work with. We use them because we need them, not because we enjoy them.

Disclaimer: the personal pronoun "we" as used here means the author plus zero or more people.


Java is very simple and has, hands down, the best tooling of any major language out today. That is why it is the most popular.

Java isn't my favorite language, but even I can remove my blinders enough to see its strengths.


Say what?

The words that come to mind when I think of Java tooling are certainly not "best". More like "bloated, slow, too-fscking-much-XML".


Name another language + tooling ecosystem where you can do ALL of:

* Perform automatic refactoring of a 100k+ LOC project and be confident that absolutely nothing breaks.

* Reliable edit-and-continue in the debugger.

* Fast incremental compile times.

* Easily pull in third-party libraries without messing with include-dirs, link-dirs and whatnot, and automatically get the documentation built in and ready for autocompletion.

I won't list all the other things you can do, but no other popular language even comes close to the tooling of Java or C#. C and C++ have too many different build systems and too much lazily evaluated template code and typecasting for the IDEs to handle well. Python, Ruby, JS, PHP... well, they are all dynamic, so good luck with fancy IDE features.


Do not go up against Erlang. You will lose.


Erlang has some pretty dope tooling, but Java still has a much larger selection and some low-level tools that are just fantastic for performance (JMH is one of my favorites).


Spend a week trying to get a project you haven't touched before to actually run on a computer...


XML? That's probably Spring. I hate that thing. But Java's tooling for monitoring is crazy good. The benchmarking harness that gives back the assembly code generated for hot code is unmatched in any other non-native language. The editors, and how good hinting and auto-complete are: I don't know of another language that integrates so well.

Java really raised the bar on tooling, and other languages/platforms have had to push hard just to get anywhere close.


Try Spring Boot. Not a line of XML :)

https://spring.io/guides/gs/spring-boot/


A quote I found one time, can't remember who: "When your framework has a framework, there's a problem."


Whenever someone brings up XML as a critique of Java, they might as well be saying, "I haven't glanced at Java in well over a decade".

There's just no faster route to completely discrediting yourself in eyes of anyone who's actually used it recently.


Maybe you're a better programmer than you are a configure-r. But its tooling is insanely, insanely good.


Java had a full-court-press advertising campaign with powerful partners at a time when 'web' was the new thing, and it was going to be that language. The superior support and tooling was a result, and sustains it to this day.


Walter, thank you so much for finally doing this! I am so happy that Symantec finally listened. It must have been really frustrating to have to wait so long for this to happen. I have really been enjoying D and I love all the innovation in it. I'm really looking forward to seeing the reference compiler packaged for free operating systems.

Thanks again, this news makes me very happy!


You're welcome! It makes me happy, too. I've wanted to do this for a very long time.


In addition to your great work, Walter, Symantec really deserves extra acknowledgement for the effort on their side in releasing their rights in order to make this happen.

It seems we haven't had a lot of good things to say lately about Symantec and I hope this gives them something positive to celebrate.


Just to pile on: Kudos for this!


I don't understand: I thought Walter owned the copyright, why does Symantec have anything to do with it?


Symantec purchased Zortech C++, and created Symantec C++. Symantec left that business, and licensed the code to Digital Mars C++, which was later used as a basis for DMD.


I remember using Symantec C++ as well. I was a teenager and on a very tight budget at the time. It was easily the best C++ IDE and compiler you could buy at the time for the price.

I might have to give D a shot now just due to nostalgia.


Honestly, since I'm slightly psychotic about these things, this is kind of huge to me. Part of the reason I never learned D was because the compiler was partly proprietary.

Now I have no excuse to avoid learning the language, and that should be fun.


There have been fully open source compilers for D for a long time: LDC and GDC.


Yup, but to be fair, they sometimes lag quite a bit behind dmd.


Just to clarify:

LDC master is on 2.073.2 right now (the latest DMD release), with a 2.072.2-based release imminent, and work towards 2.074.0 (currently in beta) already underway.

GDC is sadly a different story, though.


LDC follows dmd pretty closely nowadays. Also, this doesn't make a good excuse for not learning the language.


I refused to touch Swift until Apple open-sourced it for the same reason :-)


No problemo - dmd works on the Mac :-)


And best of all, it's the Boost license!

Here it is:

https://github.com/dlang/dmd/pull/6680


Why is this a good thing? I'm not familiar with the Boost license.


In short: it is extremely permissive, very similar to the MIT license but even more so (the license does not need to be included with binary copies).


http://www.boost.org/users/license.html explains it better than I can.


It is the least encumbering, very permissive, and very well regarded in both OSS and corporate circles. Really a good choice to stick with it.


We liked Boost because it is "corporate lawyer approved". You can make both open and closed source derived works from it.


That doesn't make it good. That makes it bad. It doesn't protect the rights of users. It's really important that developer tools are GPL, as that prevents vendor lock-in and EEE tactics.


How is preventing me from making a closed-source derivative of dmd in any way protecting my rights as a user of the source?


A two-edged sword in my opinion. GPL always protects the end user so they can get the source, but it constrains a creator trying to make closed source software. So as a user I like GPL, but when programming I tend to mostly avoid it. Mine mine mine! :)


It doesn't protect YOUR rights as a user, it protects other people from closed source software.


I would be careful calling it the "least encumbering"- there are definitely shorter, easier to understand licenses with fewer restrictions (i.e. zero).


We investigated public domain, but that has international legal problems with it. Boost was the best solution.


Do you have any comments on the international legal issues with public domain? Is that not recognized in some places?


The main problem with the public domain is that there is no such thing as "the public domain". Each country has its own copyright law, with its own terms and conditions. What the public domain means in the US is not what the public domain means elsewhere, e.g. in Germany. In Germany an author has moral rights which the author is incapable of giving up and may only transfer as part of their will. So rather than require all German contributors to be dead and have a will that grants their moral rights to the project, using a permissive license gets the same effect.


SQLite ran into trouble by using public domain, due to it not being recognized and for other reasons, and as a result will sell you a copy under a different license that guarantees your rights.

https://www.sqlite.org/copyright.html


It was not recognized in some countries, as I recall. Maybe things have changed since we decided on Boost a decade ago, but who cares. Boost works.


Several European countries do not recognize the ability of copyright holders to completely give up their copyright. So it needs a license of some kind.


And that's why the Unlicense has a BSD-style "Anyone is free to…" clause.


What about CC0? I thought it was explicitly made to be a public domain replacement where there is no "public domain"?


Does that mean that the backend can be rewritten in D at some point? Speaking of which, could there be a standard D intermediate language? (Or is there one? I've never looked at the glue code between the frontend and backend/s.)


All an intermediate language does is slow the compile speed down.



Congratulations on getting this through.


And it was just merged into the master branch!


Why is that good?


Good to hear the news, and congrats to all involved.

Since I see some comments in this thread asking what D can be used for, or why people should use D, I'm putting below an Ask HN thread that I started some months ago. It got some interesting replies:

Ask HN: What are you using D (language) for?

https://news.ycombinator.com/item?id=12193828


Too late to the party to add my answer to your Ask HN, but here we go: I use D at Netflix for a machine learning backend.


Please consider adding to http://dlang.org/orgs-using-d.html


Super interesting, could you say a little more? Any blog posts that talk about this? From what I gathered from HN, part of AdRoll's ML is in D.


Interesting, thanks!


Interesting change! Before, people had a choice between the proprietary Digital Mars D (dmd) compiler, or the GCC-based GDC compiler. And apparently, since the last time I looked, also the "LDC" compiler that used the already-open dmd frontend but replaced the proprietary backend with LLVM.

I wonder how releasing the dmd backend as Open Source will change the balance between the various compilers, and what people will favor going forward?


Many people have been reluctant to contribute to the backend because of the license, which was very understandable. Now this barrier is obliterated.


Each of dmd/gdc/ldc has their individual strengths and styles. It's an embarrassment of riches.


> Each of dmd/gdc/ldc has their individual strengths and styles.

I'd be really interested to hear a comparison of those from someone experienced with D and its community.


Professional D coder here.

- DMD is by far the D compiler with the shortest compile time. It's actually so fast that it comes with a utility, 'rdmd', which compiles and then executes a D program. Say goodbye to shell/perl/python scripts; now you can have compile-time checks without an explicit/slow compilation step. We use it for automation tasks.

- GDC is the compiler of choice when it comes to supporting multiple targets and cross-compilation. It closely follows GCC and G++ command line conventions, meaning that with a single properly written Makefile it's easy to target x86/x86_64/arm on GNU/Linux (or Windows, but the mingw support has been in stasis for years). Debian comes with a pre-compiled arm-targeting GDC compiler. gdb, gcov, and operf work well. The generated code is fast (more so than LDC's); we use it for heavy computation tasks.
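To illustrate the rdmd workflow described above (the file name and contents are my own hypothetical example), a D source file can be run directly like a script:

```d
#!/usr/bin/env rdmd
// hello.d - run with `./hello.d` (after chmod +x) or `rdmd hello.d`.
// rdmd compiles the file (caching the binary) and then executes it,
// so you get full compile-time checks without a separate build step.
import std.stdio;
import std.string : capitalize;

void main(string[] args)
{
    auto name = args.length > 1 ? args[1] : "world";
    writefln("Hello, %s!", name.capitalize);
}
```

On a cache hit, subsequent runs skip compilation entirely, which is what makes it viable as a scripting-language replacement.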


(Disclaimer: LDC developer and former maintainer here.)

> GDC is the compiler of choice when it comes to supporting multiple targets and cross-compilation.

That entirely depends on what your targets are. GDC has some claim to "support" more architectures than LDC in that it can generate code that interfaces with C for just about every target GCC implements. However, this is only part of the story – "support" is a dangerous word to use here, as full-blown D runtime support is much more limited.

If you look at the most common platforms for desktop and mobile applications, LDC is definitely in the lead – in particular, LDC targets Windows/x86 (both 32 and 64 bit), macOS, iOS and Android/ARM, none of which GDC supports. Some other platforms like AArch64 or PPC64 are beta quality on LDC, but have not received significant work on GDC. Both compilers support Linux/ARM (but admittedly GDC might be a bit more stable there).

> The generated code is fast (more than LDC)

[citation needed] This doesn't match my experience. More often than not, GDC and LDC are pretty much head to head, apart from small differences either way from the different backends (GCC vs. LLVM). In addition, though, LDC can benefit from some (minor) D-specific additions to the backend optimizer and some target-specific standard library optimizations, and sometimes has performance fixes that have not landed in GDC yet.

weka.io (one of the biggest D deployments, high-performance software-defined storage) and the Mir numerics library (very competitive performance numbers) both use LDC. If you have a real-world example where GDC generates significantly better code than LDC, please consider reporting it on the LDC issue tracker so we can look into fixing it.


>- DMD is by far the D compiler with the shortest compile time. It's actually so fast that it comes with a utility, 'rdmd', which compiles and then executes a D program. Say goodbye to shell/perl/python scripts; now you can have compile-time checks without an explicit/slow compilation step. We use it for automation tasks.

Right. See:

http://www.infognition.com/blog/2014/d_as_scripting_language...

Also of interest:

Why D?

http://www.infognition.com/blog/2014/why_d.html

Both pages are down right now; maybe the site is down, but I have read those pages earlier. Google cache or archive.org may also work. Infognition is a software product company (video and related products) that does a good amount of their work in D. No connection to them, just saw the site while browsing for D info earlier.


Both those pages are working for me now. Might have been a temporary CDN or similar issue.


Would you mind describing the nature of your work where you use D? It seems you are in a business where performance matters, which only heightens my curiosity (erm, envy).


How good is D for:

- iOS?

- Android?

- Windows?


iOS: Experimental https://wiki.dlang.org/LDC#iOS_.28iPhone_OS.29

Android: Experimental https://wiki.dlang.org/Build_LDC_for_Android

Windows: Works. Here is the Visual Studio Integration https://github.com/dlang/visuald


iOS kind of works, but is lacking bitcode on mobile devices.

It's lacking a bit in wrappers for system and GUI calls, but for Android see dlangui, and JNI is not so bad.

It's very practical to write libraries in D. Possible to write whole apps, but I wouldn't start there for now.


On Windows it's fine, but same library packaging problems as for C++...


DMD compiles really fast. At the level of Go. As the reference compiler it always has the latest features.

LDC compiles to fast code and only slightly lags behind the reference. It might even catch up soon.

GDC compiles to fast code and supports many architectures in theory. It lags behind the most and architecture support also requires a ported runtime.


The download page gives a brief summary:

http://dlang.org/download.html

More information on the wiki (also linked to from the download page):

https://wiki.dlang.org/Compilers


Please don't get me wrong, as I don't want to start a flame here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs. They do have a command line option to disable the GC, but anyway... this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go, which started as a "systems programming language" too but later switched to a more realistic "networking stack".

Regards,


Systems programming languages with GC have existed since the late 60s, with ALGOL 68RS being one of the first.

Since then, a few remarkable ones were Mesa/Cedar, Modula-2+, Modula-3, Oberon(-2), Active Oberon, Sing#, and System C#.

The reasons why most of them have so far not won the hearts of the industry were not only technical, but also political.

For example, Modula-3 research died the moment Compaq bought DEC's research labs, and more recently System C# died when MSR disbanded the Midori research group.

If you want to learn how a full workstation OS can be written in a GC enabled systems programming language, check the Project Oberon book.

Here is the revised 2013 version, the original one being from 1992.

https://people.inf.ethz.ch/wirth/ProjectOberon/index.html


D's garbage collector is both written in D and entirely optional; that alone should qualify it as a systems language.

The GC can be disabled with @nogc; the command line flags are only needed if you want to disable it for the whole program or want warnings about where allocations happen. https://godbolt.org/g/IQ0O06
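A minimal sketch of what @nogc looks like in practice (function name is my own illustration): the attribute is checked at compile time, so any GC allocation inside an @nogc function is a compile error, not a runtime surprise.

```d
// `@nogc` is verified by the compiler: any operation that could
// allocate on the GC heap inside this function is rejected.
@nogc nothrow int sum(const(int)[] values)
{
    int total = 0;
    foreach (v; values)
        total += v;
    return total;
}

void main() @nogc nothrow
{
    int[4] data = [1, 2, 3, 4]; // static array on the stack, no GC involved
    auto s = sum(data[]);
    // int[] bad = new int[10]; // error in @nogc code: allocates with the GC
    assert(s == 10);
}
```

Because the check is per-function, you can mix @nogc hot paths with GC-using code elsewhere in the same program.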


> https://godbolt.org/g/IQ0O06

(And, of course, the entire main() disappears on -O1 and above: https://godbolt.org/g/FAWtak.)


The GC is only relevant if you allocate using the GC, because that is the only time the GC can run. If you use @nogc on your functions, you are guaranteed not to have GC allocations. You can use D as a better C with no GC but other good features. You can even avoid the D runtime completely if you want.
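As a sketch of how that guarantee works in practice (the function name here is made up for illustration), the compiler verifies that an @nogc function, and everything it calls, never touches the GC heap:

```d
// @nogc is checked by the compiler: this function, and all of its
// callees, are guaranteed not to allocate from the GC heap.
@nogc int sumSquares(const int[] values)
{
    int total = 0;
    foreach (v; values)
        total += v * v;
    return total;
    // total ~= ...; // any GC-allocating operation here would be a compile error
}

void main()
{
    assert(sumSquares([1, 2, 3]) == 14);
}
```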


Thanks for your helpful answer.

...and I don't understand why some people have downvoted my question. Anyway, I'll continue reading the docs :)


I don't understand the downvotes either.

Anyway, the docs are not that great, so Mike Parker has started a blog post series about the GC.[1] If you have questions, drop them in the d.learn forum.[2] They're pretty friendly (most of the time).

[1] http://dlang.org/blog/2017/03/20/dont-fear-the-reaper/ [2] https://forum.dlang.org/group/learn


Didn't downvote, but found "Please don't get me wrong, as I don't want to start a flame here, but…" redundant and slightly annoying. ;)


Why does GC disqualify a language as a systems language?

P.S. I think of a systems language as one that runs directly on the machine, e.g. Swift, C, Go. They operate at the "system" level.


You can definitely do systems programming with a GC, but it does get in the way sometimes. Drivers will most likely need to bypass it entirely; more importantly, libraries to be embedded in other programs (including scripting languages) now have to deal with two garbage collectors rather than one.


> Why does gc disqualify a language as a system's language?

AFAIK a "systems programming language" should have deterministic performance, which Go obviously doesn't have. But different people might define "systems" differently.


The people who define Go as a "systems language" use their own terribly useless definition of "systems language". Per wiki: "For historical reasons, some organizations use the term systems programmer to describe a job function which would be more accurately termed systems administrator."

So for them, a language that can be used by DevOps Engineers is a systems language. While for sane developers, a systems language is one with deterministic attributes and the ability to be compiled to native code with no/a minimal runtime. I.e. one you can code "a system" in.


> I.e. one you can code "a system" in

Yeah, but "a system" doesn't necessarily mean "an operating system". There are lots of kinds of systems. Most people I know who I've talked about this with consider middleware-ish development (think message queuing systems, application servers, etc.) an aspect of "systems programming".


People have been writing OSes with GC enabled systems programming languages since the late 60's.

Would you consider the employees of UK Royal Navy, Xerox PARC, ETHZ, DEC, Compaq, Microsoft as insane developers?


Also, does the Go standard "require" a mark+sweep?

If you write an implementation which uses Reference Counting, it can be "deterministic" and won't require a runtime.


Sure, but then you'll leak memory whenever there are cycles in your data structures.


Also, it depends on how you define an "OS". If you're in the GNU/BSD/Windows/MacOS camp (where init, ls, sh, cat, etc. are part of the OS), you can have Go (as well as Java) in the OS.


I think Go should be pretty deterministic, if you don't allocate or free at all at runtime.

Just like C. You lose determinism with malloc() and free(). As any embedded/kernel developer knows, malloc() often takes an unacceptably long time. So can free().


Go does allocate and free at runtime, it just isn't explicit because it's a garbage-collected language and the garbage collector handles most of the freeing for you. Different GC'd languages handle allocations differently: some have a keyword, some allocate whenever creating an instance of a type over a certain size.

With C, C++ and Rust, allocating memory often boils down to calling something equivalent to malloc and free. While the cost is not precisely known, all modern OSes provide guarantees on the execution time relative to the size of the request. This is almost always such a simple and fast operation that allocation gets optimized only after the algorithms and data structures have been tuned and it remains a known bottleneck. Many applications never get to that stage of optimization (games almost always do; stupid fixed frame-time budget).

Consider the amount of work the GC does and understand why any GC is generally considered non-deterministic: https://blog.golang.org/go15gc


Do modern C++ or Rust allow you to say "deallocate this pointer here" (like free or delete)?

I was under the impression that Rust (and safe_ptr) deallocate at scope end, which could also cause framerate issues (unless you do ugly scope hacks).

I do agree that you're unlikely to bump into this issue, though.


Yes, you can explicitly call drop ( https://doc.rust-lang.org/std/mem/fn.drop.html ) in Rust, which just uses move semantics to force it out of scope. A similar function could be implemented in C++, and called like drop(std::move(value)).


There are a couple of options in C++.

You can just use delete, as you mentioned; it is a C++-only construct as far as I know.

For the std library smart pointers, you can get their value (the raw pointer), delete that, then assign nullptr to the smart pointer. I would consider this a code smell, and ask hard questions of the authors of such code.

The simplest thing to do is to add new scopes. You can introduce as many blocks with { and } as you like. It is a common pattern to lock a mutex with a class that releases the mutex in its destructor (and acquires it in the constructor); the std library includes std::lock_guard[0]. To ensure the smallest possible use of the lock, a new scope can be introduced around just the critical section, and the first line of the block can pass the mutex to the scope guard. That should be about as small and efficient as can be, while being exception-safe and easy to write. Hopefully it is also easy to read.

You can introduce new scopes with std::shared_ptr or std::unique_ptr as well. This seems common and reasonable.

[0] - http://en.cppreference.com/w/cpp/thread/lock_guard


With Rust there is the drop() function in the standard library (which is just a no-op function that takes an owned value as an argument) which allows you to shorten the lifetime of the value without having to do ugly things with blocks.


There are techniques used in GC languages to reduce or even eliminate runtime allocations. Basically, you reuse objects and pre-allocated collections rather than creating new ones. These techniques are common in games programming, and libraries like libgdx[0] (for Java) offer object pools to make this easier. I know some Java server applications use similar techniques.

[0]https://github.com/libgdx/libgdx/wiki/Memory-management#obje...


I can have non-deterministic performance even when using C. Also, if I use Boehm GC with C, does that disqualify C as a systems language?


Is GC with C mandatory? It is in Go. See the difference? C doesn't come with a GC.


GC in D is NOT mandatory.


Could one write a driver in D?


There's a fairly recent growth in the D community of a "better C" movement, that is, people using D for lowest-level systems development, device drivers, etc. This seems to have lit a fire under initiatives to reduce D's GC dependency, and its runtime dependency in general. I don't follow this too closely, but it seems they are still at the "hacks and experiments" stage in terms of real-world use (e.g., people writing custom runtimes that stub out the GC, etc.).

DConf (http://dconf.org/2017/schedule/) is in three weeks, and there are a number of related talks. Once the videos are out, you might get a better sense of the current "D as a better C" landscape.


Here is someone writing a kernel in D: https://github.com/Vild/PowerNex



Ah, the memories...


Yes – there are a number of research and/or hobby projects that implement kernels, drivers, etc. in D.


Yes


Thanks!

Still reading the docs... D seems to have a very clean design. No baggage from some "glorious" past (the 70s, PDP, IBM 360, etc.).


Something I always thought was cool about dlang was that you can talk to the creator of the programming language on the forums. I don't write much D code as of now, but I visit the forums every day for the focused technical discussions. Anyways, congrats on the big news!


This was something that always rubbed me the wrong way about the language, and it was an impediment to adoption for me (for D, but also Shen and a few others). In this era, there is no excuse for a closed-source reference compiler (I couldn't care less if it's not a reference compiler; I just won't use it). I'm surprised it took this long to do this; it seems like D has lost most of its relevance by now... relevance it could have kept with a little more adoption. I wonder if it can recover.


This is exactly my thought. I was really excited about D1 at the time, when there was no Go/Rust/Swift/... I called D the C++ that should have been. I recommended D to almost everyone. They liked it. Some of them even wrote non-trivial programs in D. However, all of them went back to C/C++ later during the painful D1-to-D2 transition amidst unnecessary clashes. It was a mess IMHO, which deeply hurt D's adoption. D2 managed to reach a consensus among different parties in the end, but by that point Go had stabilized and Rust had started to look promising with more exciting modern features. D is not the shiny and unique language any more. D had a chance to gain popularity, but now it has lost the momentum. It is a pity.


Hi... You have a superb blog. Somebody on the forum is trying to port your hash map code right now, BTW. Yes, the past is a shame, though it's momentum relative to itself that matters for a language more than momentum versus others. If Go takes off for network services it doesn't particularly hurt D, because the total market is so big. The language has really taken off in the past few years, and I use it within a hedge fund environment. I work with some refugees from C/C++ - we couldn't imagine returning to those languages for what we use D for.

One sample project we built for internal use is here: https://github.com/kaleidicassociates/excel-d/blob/add64bit/...


That's intriguing. I've done a bunch of XLL coding in C++ and C# (with Excel-DNA) for trading floor Excel over the years. Gotta wonder what you're building in D. Have you got a quant analytics lib in D? Or maybe market data connectivity, or historical data? Anyway, you might want to take a look at this [1], which can serverize XLL spreadsheets with no sheet or code changes. Works with RTD too. Once a pricing or risk sheet is serverized, traders don't have to hit F9 or rekey data anymore...

[1] http://spreadserve.com


Thanks for sharing.

Friend of mine started a company called Resolver Systems to do that for Python (end result). Nice experiment, but bad timing to launch just before the crisis. We have some algorithmic and infrastructure things in D, though plenty is done in other languages too. I ported the Bloomberg API to D - it's open sourced but currently not yet directly used in production. It may start to be in the coming months. It's not so hard to turn spreadsheets into code (it's rarely the spreadsheet itself that does something complicated), so I would rather rewrite that than try to do it automatically, because code is easier to read. Though I had dinner with a Dutch girl, a professor, who works on spreadsheets as functional languages.


Giles Thomas? I remember going to the Resolver One launch in 05 or 06ish. About the same time as I was working with a couple of rates traders plugging monster spreadsheets into my etrading infrastructure; one doing rates structured notes pricing and quoting on Bloomberg, and the other market making ETO structures on Liffe. Market making out of a spreadsheet on a limit order book was challenging! Those sheets were too big to turn into code quickly, and the traders wanted to retain direct control over the pricing logic.

The Dutch girl must be Felienne :) "Spreadsheets are code"!


>Giles Thomas?

He and Harry Percival (IIRC) later founded PythonAnywhere.com , a Python environment (and more) in the cloud. It has some nice features.


D is easier than Rust and safer than C++. That is a valuable point IMHO.


And it also interfaces very nicely with C/C++ libraries and code.


There's a wide range of possibilities in that statement, including the case where you would almost always want D over Rust, and the case where you would almost never want D over Rust.

The question is, how much easier is D than Rust, and how much safer is D than C++.


D recently has been substantially upgraded to prevent memory corruption bugs. I'll be talking about how that works at DConf in 3 weeks.


There is no quantitative metric for safety and ease of use.

If you can afford a garbage collector, D easily wins over Rust. If you really need the safe manual memory management, Rust wins. In between is still a large grey area.


> If you can afford a garbage collector, D easily wins over Rust.

This is the "the only point of the ownership/borrowing model is to avoid GC" myth that I've been working to kill. Rust's model also gets you data race freedom, to name just one other benefit.


There are plenty of reasons why Rust could win outside of requiring safe manual memory management. It is an expression based language with ad-hoc polymorphism, algebraic data types, hygienic macros, first class functions, and race-free concurrency.

I don't use rust because I don't need manual memory management and I require subtype polymorphism for a lot of things, but I choose Scala over D.


Neither Rust nor Scala hold a candle to D in terms of compile-time introspection. You can do things with it that are near unbelievable - "you must ask yourself if some law of nature was broken" quoted from memory in http://dconf.org/2015/talks/alexandrescu.html


Scala developers can do whatever they want or need at compile-time.


> race-free concurrency

Rust's thread model is free from data races (a thread must have exclusive access to a variable in order to write to it), but not from race conditions in general.


Could you give an example of a kind of race conditions allowed in Rust?

(Sorry for a naive question, just thought that a data race and a race condition are synonyms. What else is there to race over if not shared data?)


The "data race" term is jargon (i.e. means more than just "race on data") that specifically refers to unsynchronised concurrent accesses to a piece of memory/resource (where at least one of the accesses is a write), while a race condition is a higher level thing, where things aren't ordered the way one intends, even if they have the appropriate "low-level" synchronisation to avoid data races. http://blog.regehr.org/archives/490

Rust can't prevent one from doing all the right low-level synchronisation in the wrong order. In fact, I don't think any language can, without somehow being able to understand the spec of a program: something that's a race condition in one case, may not be a race condition elsewhere (this differs to a data race, which isn't context dependent).


Rust does not prevent all race conditions because doing so would be impossible. It would be akin to solving the halting problem.

Deadlocks and other synchronization issues are just some of the 'general race conditions' Rust can't solve.

Rust prevents 'data races', defined as:

- two or more threads concurrently accessing a location of memory

- one of them is a write

- one of them is unsynchronized

https://doc.rust-lang.org/nomicon/races.html


I recently (past few years) invested in learning Clojure, enough to get some practical things deployed, but I am far from being an advanced Clojure developer. I wanted to learn a Lisp dialect and am glad I chose Clojure. Rich Hickey is a guru. I don't have any gripes with Clojure but I'm not too deeply invested. Several years ago I was also enamored with Julia, which I was investigating for CPU-intensive programs, but never had the time to commit, and I was concerned about Julia being too youthful and possibly not getting enough traction. Should I learn D for CPU-intensive programs? Also, with Clojure I can make use of the many Java libraries for I/O due to Clojure's good Java interop. Does D have the analogous benefit of tried and tested I/O libraries? Thanks for any suggestions; D has me intrigued.


Is the GC mandatory in D? I was under the impression some time ago that D could run GC-free, but the standard library still widely used GC. Did I just misunderstand?


You can tag a block @nogc if you just need to avoid it for a specific part of your program. This can be done for the whole program, but quite a lot of the standard library requires the GC. I should point out that the GC has no problems that affect day-to-day usage; the stuff people complain about shows up when it's running for a while (it's conservative atm, so it leaks).


If you can tag the entire program @nogc, great.


In my understanding, you're overall correct, but they've also been working on reducing that need, making more and more of it work without a GC.


Currently working on making the Exception handling system usable without the GC. Expect it soon.


Great! (and, congrats on the re-licensing.)


You can go easier on the GC or avoid it altogether. See this article and the discussion:

https://news.ycombinator.com/item?id=13914308


> how much easier is D than Rust

https://z0ltan.wordpress.com/2017/02/21/goodbye-rust-and-hel...

D gets tricky for resource management and avoiding GC.

> how much safer is D than C++.

Bounds checking and default initialization alone fix most memory errors you may have in C++.
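A quick sketch of what those defaults buy you (illustrative; the out-of-bounds line is shown commented out because the compiler rejects it):

```d
import std.math : isNaN;

void main()
{
    int[3] a;        // default-initialized to 0, never garbage
    assert(a[0] == 0);

    double d;        // doubles default to NaN, so a forgotten init is loud
    assert(d.isNaN);

    // a[5] = 1;     // out-of-bounds index: caught at compile time here,
                     // or at run time for dynamic indices
}
```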


I agree it is valuable.

But I still think there are better alternatives now. Depending on your priorities and constraints, OCaml, Go, F#, Scala, and Swift all fit the same description (easier than Rust, safer than C++) and they're in the same realm for performance (slower than C, C++, Rust, etc., but not by much). D could have been there if they had a decade or so with a bigger community.


D certainly has made a few mistakes, and having a non-free backend in dmd was one of them. I doubt this could have been avoided, though; it was just technical debt, which Symantec has now graciously paid off. I have great respect for the Rust people, because it seems they have done everything right so far.

I am a sucker for language discussions, so let me explain what my problems are with your suggestions.

OCaml: No support for parallelism, the community is even smaller than D's; do they have a package manager yet?

Go: I like generics/templates. If I throw away type safety, I might as well use Python.

F#: I use Linux. The .NET ecosystem is not strong here.

Scala: I don't like the JVM. Also, the Scala compiler is slow.

Swift: Solid ideas, but held back by Objective-C compatibility. I don't build iOS apps, so why bother.


Yes, OCaml has opam as a package manager now. The community has also grown quite a bit recently.


Sure, but it's not just about whether you fit in the range or not, it's about language design and tradeoffs. Not everyone will appreciate D, but I really think it hits a sweet-spot of language features that I haven't found elsewhere.

I've enjoyed programming in Ocaml, Rust, C++, Haskell, many Lisps, etc. -- they are all excellent languages. But I find (to my own surprise) that I often come back to D when playing with a new design, exactly because of that sweet-spot. I highly recommend giving it a try if you haven't already.


That's fair. I love Scala but hate the JVM and all the compromises Scala has made to shoehorn itself into the JVM. Even though there is now a scala-native, I'd still probably not use it until it matures to the level that D is at, and D (or OCaml) would probably be a better choice for me for those usecases.



There are many languages that fit certain general descriptions, but that doesn't mean the experience of using them will lead me to think they are at all comparable in how well they help me solve the kinds of problems I face. Go and Scala - I somehow don't think they are close substitutes for each other. The kind of person who likes D's generics will not be the kind who is ultra happy with Go. The kind who likes the short learning curve and plethora of network libraries in Go might not be thrilled by the early D experience. It all depends.


What's special about D? Why should I learn it?


I think one of the very good features of D is that it provides a lot of infrastructure that helps with lots of little generic things -- things that are not really needed in the language, but help reduce the amount of code you write or simplify common actions you take. It seems like Walter (and all the other contributors) have distilled their experience writing programs into creating a language that helps a lot with some of these things.

Stuff that I particularly like.

Assert/Enforce: http://ddili.org/ders/d.en/assert.html , Unit testing: http://ddili.org/ders/d.en/unit_testing.html , Contract programming help: http://ddili.org/ders/d.en/contracts.html, and http://ddili.org/ders/d.en/invariant.html , Scope (love this): http://ddili.org/ders/d.en/scope.html
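Of those, scope guards in particular deserve a quick sketch (a minimal illustration; the log array and messages are made up):

```d
string[] log; // records what happened, for illustration

void copyFile()
{
    log ~= "open source";
    scope(exit) log ~= "close source";      // runs when the scope ends, even on exceptions
    log ~= "open destination";
    scope(exit) log ~= "close destination"; // guards run in reverse order of declaration
    log ~= "copying...";
}

void main()
{
    copyFile();
    assert(log == ["open source", "open destination", "copying...",
                   "close destination", "close source"]);
}
```

Cleanup code sits right next to the acquisition it matches, instead of in a distant finally block.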

And the other usuals -- templates, mixins, etc....

BTW, I mostly use D as a better C than as a better C++...


Good list.

Another good feature:

Interfacing to C is somewhat easy at least for the basics. (I haven't looked at advanced cases yet, but maybe someone else can comment on that.)

https://dlang.org/spec/interfaceToC.html

This is a great feature IMO, because it allows you to (re)use the huge number of existing C libraries out there.

Here is a simple example:

Calling a simple C function from D - strcmp:

https://jugad2.blogspot.in/2016/09/calling-simple-c-function...
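In the spirit of that post, a minimal sketch (using the strcmp declaration from D's bundled C bindings in core.stdc):

```d
import core.stdc.string : strcmp; // C's strcmp, declared for D in core.stdc
import std.string : toStringz;    // produces a NUL-terminated C string

void main()
{
    // "apple" sorts before "banana", so C's strcmp returns a negative value.
    assert(strcmp(toStringz("apple"), toStringz("banana")) < 0);
    assert(strcmp(toStringz("same"), toStringz("same")) == 0);
}
```

No wrapper layer or FFI boilerplate; D shares C's ABI, so the call is direct.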

There's a handful more small examples of a few things D can easily do, here on my blog:

https://jugad2.blogspot.in/search/label/DLang

(And of course, there are many more at the DLang tour site - https://tour.dlang.org/ )

Also, there are a few good videos about D (alone) and being discussed along with other languages, at my blog's DLang posts link above.

I'll just put some post titles here to give a quick idea:

Simple parallel processing in D with std.parallelism

Using std.datetime.StopWatch to time sections of D code

Read from CSV with D, write to PDF with Python

Command line D utility - find files matching a pattern under a directory

min_fgrep: minimal fgrep command in D

num_cores: find number of cores in your PC's processor

Func-y D + Python pipeline to generate PDF

file_sizes utility in D: print sizes of all files under a directory tree

deltildefiles: D language utility to recursively delete vim backup files

[DLang]: A simple file download utility in D

Getting CPU info with D (the D language)


Just added another D post to that list:

Porting the text pager from Python to D (DLang)

https://jugad2.blogspot.in/2017/04/porting-text-pager-from-p...


Except for perhaps Lisp languages, almost no language makes compile-time computing and code-generation so easy. This allows for some really powerful language features that can be designed as libraries, and puts this power in the hands of "regular developers" rather than only in the hands of template wizards.


Yeah, I really love that at compile time you can meta-program in almost the same language that you use for normal run-time things. I think the comparison to lisp is apt. It almost feels like lisp macros.

Think C++ template metaprogramming but much, much easier and thus, seemingly more powerful. It's not that you couldn't do the same in C++, but D's metaprogramming is so much more accessible that it makes you want to use it.


There's plenty you can do with D but not C++, see regex, bitfields, swap member-by-member with checks for mutual pointers, the pegged library for grammars, etc etc.


I don't know; Boost has me convinced that even things like regexes in C++ templates could be possible, if your compiler has a big enough stack to handle very deep template recursion. It's theoretically possible to build a regex parser in C++ templates, right? It would be horrible, but possible.


With D it is possible and not horrible and already done. ;)

Talk: http://dconf.org/2014/talks/olshansky.html


> It's theoretically possible to build a regex parser in C++ templates, right?

You'd probably do large portions of it in "constexpr" types and functions rather than relying on recursive metaprogramming techniques.

They are also planning on adding overloading based on whether a function is constexpr, which is important if you want the same library to support both compile-time and run-time matchers.

So it's all theoretically possible already. But D has the advantage here in that it's not just possible but usable.


For someone who knows zero about D, but is a total Pythonista, how would you compare the meta-programming facilities?


I guess the best comparison to Python would be the dunder methods. You know how in Python a class really is just a dict, right? Like, foo.bar is pretty much syntactic sugar for foo.__getattribute__('bar') in most cases. Or how a + b is sugar for a.__add__(b).

Well, imagine that all of the things you can do in Python by messing with dunder methods, function decorators, or metaclasses could be precomputed by a compiler and would not even execute at all during runtime. It makes everything really fast at runtime and easy to precompute at compile time.


Yes, and plus you get type safety with all of that! When I'm writing in D, my editor marks all the type errors (and typos) in my code, almost as fast as I make them. The DMD compiler is so quick that there's almost no lag. This leads not only to a rapid development cycle, but a much higher level of reassurance (than I've had when writing, e.g., metaclass hacks in Python).


Well, overriding dunder methods isn't really metaprogramming. It's just method inheritance/override.

Is there anything in D similar to decorators? How does the registration pattern work in D, for instance?


Do you mean like this kind of registration decorator?

https://github.com/rejectedsoftware/vibe.d/blob/master/examp...

Decorators (in D, 'user-defined attributes', or UDAs) work differently. It's not a function-composition feature, but more of a tagging feature. You write a class, function, etc., tag it with custom attributes, and then have a separate compile-time function walk over your code, find the attributes, and augment the code based on the meaning of the tags. (I used to wish that D had adopted Python-style decorators, since they are easy to reason about and implement, but I can see the logic of the more general UDA system that they adopted.)
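A minimal sketch of that tagging-then-reflection flow (Route and hello are made-up names; a real framework would use the tags to build, say, a routing table):

```d
// A UDA is just an ordinary type used as a tag.
struct Route { string path; }

@Route("/hello")
void hello() {}

void main()
{
    // Compile-time reflection discovers the tags on a symbol;
    // this loop is unrolled by the compiler.
    foreach (uda; __traits(getAttributes, hello))
        assert(uda.path == "/hello");
}
```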

In practice, though, you would often use templates to achieve the same effect. Given a memoize template (really, just a memoize function), and an expensive-computation function,

    auto fastComputer = memoize!expensiveComputer;
produces roughly the equivalent of

    @memoize
    def expensiveComputer(): ...
but with opportunities for compile-time optimization.


Anybody out there that has experience with both Nim and D? I am curious how they both compare in terms of metaprogramming.


I've dabbled with Nim. It's also a great language with great metaprogramming features. Plus a fast compiler that generates very lean code. (The real reason I stick with D over Nim, above all else, is RAII -- once you have it, it's really hard to live without. Okay, that, and also ranges, and array/string slices... they are fantastic to work with. Nim's got a better GC story right now, IMO, but there's a lot of activity in the D community to become more competitive in that space.)

I found that some of Nim's MP features, in particular AST macros, are a little harder to work with than D's templates and compile-time function evaluation. You get a lot of flexibility, but the cost is high. I'm not a big fan of how "mixin" is used in D to splice source text into a generated function - it's certainly less principled than an AST transformation - but in practice the resulting code tends to be concise and easily read, whereas the AST-macro approach introduces a lot of accidental "noise" and complexity. The complexity raises the bar for reaching for a compile-time solution, whereas in D it seems as natural to write compile-time code as it does to write regular code. (Not to pick on a strawman, though - AST macros are not Nim's only MP tool.)

The Nim MP feature I never really played with was the rewrite rules. They seem interesting in theory, but maybe a little too magical for my liking!

I have a lot of respect for Nim. I am glad we live in a world with so many options. Like Walter said in an earlier comment, it's an embarrassment of riches. :)


Thanks for the writeup. I've just been evaluating Rust, D, and Nim, and reading about your experience has been helpful.

Can you elaborate on how D is planning to sort out the garbage collection? Nim's GC is extremely fast and thread local, and can be disabled without breaking libraries (according to the author, it does something with memory regions that I haven't 100% grasped yet).

I've googled about D's garbage collector and it's apparently been discussed as the language's biggest flaw since 2013, but I can't find any information whatsoever on what's being done in that regard.


Like Nim, you can also disable GC in D, and you can avoid GC altogether by not allocating GC'd memory (GC is only ever triggered at allocation time), or at least by preallocating and then disabling the collector. Raw allocation is always possible, and you can "emplace" D structs and classes into unmanaged memory.
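For example, a minimal sketch of the "preallocate, then disable" approach using core.memory:

```d
import core.memory : GC;

void main()
{
    auto buf = new int[1024]; // preallocate while the GC is still active

    GC.disable();             // collections will no longer run automatically
    // ... latency-sensitive work using buf, no collection pauses ...
    buf[0] = 42;

    GC.enable();
    GC.collect();             // explicitly collect at a convenient moment
    assert(buf[0] == 42);
}
```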

The collector itself isn't being improved as far as I know -- at least I haven't seen any initiatives mentioned recently with that goal. In fairness, I haven't been following the community activity very closely in the past few months, but I think that's accurate. Nim definitely has a technical advantage re: its GC implementation.

The bigger movement has been the "@nogc initiative", which started with adding a @nogc attribute to the language (the compiler can verify that a function tagged with @nogc, and all of its callees, do not allocate GC memory). There is an ongoing initiative to make more of the Phobos library @nogc-compliant, to take advantage of this feature. There has also been a lot of work on custom memory-allocators [1], and I think the plan is to incorporate into Phobos where it makes sense, so you can have functions which take custom allocators, have thread-local allocators, etc.

[1] https://dlang.org/phobos/std_experimental_allocator.html

I don't speak for the community or the dev team, but I think the long-term goal is to make the GC a feature that is available when you want it, but that isn't a dependency for using the standard library. Either through custom allocators or through @nogc guarantees, you'll be able to ensure that your program's memory management is deterministic.


Thanks for the info. It's pretty hard to get clear, updated info on these languages as they have yet to gain that much traction (and they tend not to be backed by big entities that have a PR budget).

For anyone else who's evaluating D and looking into its GC situation, the most recent blog post on Dlang.org (https://dlang.org/blog/2017/03/20/dont-fear-the-reaper/) seems to embrace the presence of the GC, but also ends by saying the next blog post will describe how to go without the GC. So there is indeed awareness/activity on that front!


There has been work done to improve the collector, but nothing has made it into mainline. For example, one implementation was missing parts for Windows but worked on Linux, so it couldn't be pulled in and used.


If you want to work with C code, it is an excellent choice. I use it for numerical computing. It's an easy language to learn, no need to worry about memory management if you don't want/need to, generally good syntax, nice compile time features. Overall the best "better C" in my opinion.


See bachmeier's work on D-to-R integration.


Also, how does D compare to Rust?

(If I'm going to learn a new system programming language, which one should I pick?)


My early impression with Rust is that it's more ambitious than it is capable of delivering. Lifetime annotations in particular are incredibly ugly. It's the kind of thing that makes me think: "...ok, so this is why other languages don't simply do static lifetime checks" and I guess I expected Rust to be the solution for that problem; it is instead not a solution, but the deliberate decision: "let's do static borrow checking" and that was at odds with what I expected from it.

In short, I thought Rust was "we solved the pain of this difficult thing." However, it is not really a novel solution to the difficult thing, but simply a decision to undertake it.

I've now moved to evaluating Nim and D, and while I've barely dabbled in them (mostly going through the docs and writing some simple example-like code), I can at least say that Rust is likely to be much more controversial and polarizing than these languages. I recommend giving it a try - go through the official "book" which serves as the main documentation. If you survive lifetime annotations without finding them too obnoxious, then you'll probably like Rust.

Personally, I find Rust's extreme rigorousness (and the laboriousness that follows from that) makes it very niche, and I find myself rather unexcited about using it for anything.


Note that the vast majority of lifetimes in Rust are elided so that no explicit annotation is required. When the compiler can't figure out the annotations, or if you're declaring a type that contains a borrow, then you need annotations.

The annotations have, for the most part, faded into the background for me. I don't find them to be particularly pervasive in the code I write, and when I do need them, it's usually to fix a reasonably straight-forward case where the compiler failed to elide them. With that said, in the years I've been using Rust, I have committed one or two lifetime-related bugs. (No memory unsafety resulted, of course, but it did result in data being annotated with a shorter-than-actual lifetime.)

> In short, I thought Rust was "we solved the pain of this difficult thing."

Part of the pain that Rust purports to solve is the significant reduction (and debugging thereof) of memory unsafety errors. In return, you must deal with the pain of an ownership and borrowing system, which might present challenges to patterns you may have used in another language. The benefit is that you have a compiler to tell you when you've mis-stepped instead of an end user filing a bug report (or worse).


I agree with your characterization of the benefits and tradeoffs of Rust. I think that regardless of whether it solves the pain point, at the very least there needs to exist something that undertakes static lifetime checking. I respect Rust from a distance. It's not in line with my particular needs and/or expectations, but I'm glad it exists, because something needs to exist which does what Rust does, and I'm not aware of anything else that does it.

Other than lifetime annotations, I have a couple other niggling issues with it but I suspect it's because I haven't fully grasped certain things yet, and so I'll refrain from commenting on them for the moment.


D has been around longer, has powerful compile-time metaprogramming tools, and has a very fast compiler. However, it has a garbage collector, and is not entirely memory-safe by default (though is beginning to optionally incorporate some ideas from Rust [edit: or not], maybe making that easier once you put in the work). It seems to follow C or C++ in what sorts of things it makes language-level features.

Rust is newer (though being used in some big projects like Firefox/Servo, Visual Studio Code's search functionality now uses it by default, etc), has (afaiu) a more powerful type system, and is entirely memory-safe by default. However, this makes it somewhat harder to learn at first, and it also has a relatively slow compiler. It follows ML-style languages a bit more, with things like pattern matching and sum types built into the language.

Personally, I prefer Rust, because it feels like a stronger foundation with the ML-like type system and no GC. YMMV though, depending on what is important to you.


D's memory safe notions are very different from Rust's.


Could you be more specific? It doesn't seem like there's much room in the conventional definition of memory safety (e.g. https://en.wikipedia.org/wiki/Memory_safety ) for variation, other than, I suppose, putting conditions on when/which things are actually guaranteed.



Hm, I'm not sure I understand how that's answering my question. Neither of those seem to even define what the "memory safety" notion means for D so I can't compare. And, looking into it further, none of the links from https://dlang.org/spec/memory-safe-d.html define (or compare) what D means by "memory safety"/if it differs to the conventional definition.

I guess your original comment was maybe meaning how D enforces (the "normal" definition of) memory safety differs to Rust? In any case, I read over both of those, and, to me, they both seem to essentially be a slightly less general version of Rust's scheme (possibly independently invented), rather than something very different. I'm interested to hear how you think they specifically differ to Rust.


Memory safety in D is the usual definition - making memory corruption impossible.

There's no notion of "borrowing" in D nor any notion of "only one mutable access at a time".


By not having borrowing, do you mean the compiler doesn't stop one from mutating something that has a dependent scoped pointer pointing into it? How does D avoid dangling pointers for that?


It has a GC, I think.


> a relatively slow compiler

Well… it's relatively slow in release mode, but C++ compilers can often be slower. GHC is always slower :D


For some reference: DMD (the reference D compiler this thread is about), which consists of about 160000 lines of D code and 100000 more lines of C++ code, compiles in a bit over 3 seconds on my machine.


Counting lines of code per second for C++ can be tricky, because the .h files get compiled over and over for each project. The DMD front end is compiled with one command, and each D source file is read/parsed/compiled only once.


If you plan to learn a systems language for systems reasons (embedded development, compilers, low-level interfaces, etc) then Rust is the pick. No GC and the option to have no runtime (or a very small one, by default).

In other words, if you're looking to replace/supplement your C use: go Rust. If you're looking to replace/supplement your C++ use: probably go D, with some caveats towards actual use.

Note: I like both languages, but do prefer Rust for my personal use cases.


> (or a very small one, by default).

Some details here... every language other than assembly languages has some amount of runtime. This is Rust's: https://github.com/rust-lang/rust/blob/master/src/libstd/rt.... (as you allude to, many people refer to this amount of runtime as "no runtime" since it's very, very small.)

You could also consider some other things as part of a runtime; for example, by default on most platforms, Rust includes jemalloc. That can be removed, though not in stable Rust. Same with the standard library code, which can be removed for libraries, but not binaries just yet (due to a small technicality regarding the stability of some attributes.) Things that do this are likely to use some of Rust's other unstable features at the moment, so it's not generally a burden to do so, and those interfaces rarely change right now.


> every language other than assembly languages has some amount of runtime

How are you defining "runtime" such that this statement is correct?


What language are you thinking of that I'm missing? C has crt.0, for example.


crt0 does things you would need to do manually in hand-coded assembly anyway. It is no different than manually creating a library of assembly subroutines that you inject into your code, at which point assembly technically has a runtime as well.

I see a distinction between a runtime library and a language runtime. The former is just a set of convenient routines for interacting with a specific platform. The latter is something that runs alongside your program to facilitate its execution, like a JIT or an interpreter.


> at which point assembly technically has a runtime as well.

The key is, you are doing the injecting in asm.

Anyway, your distinction is pretty solid. Most people just use "runtime" to mean either one, and then rely on the context to disambiguate.


Do you depend on the C runtime as well?


IIRC no; basically the same thing is done though, just not as a dependency.


D2, as a language, favourably. As an ecosystem, not favourably.

I REALLY ENJOYED D1, REALLY! (CAPS 11). D1 was like C, but better. It was heavenly. D2 is like C++, but better. Not my cup of tea though. Give it a spin for a day or two. I'm sure you'll like it if you like C++. Trouble arises, as with all niche languages, when you try to move to production and you have to source libraries and/or support.


Favorably. Pick D.


What kind of support problems did you have? I found the opposite. It was possible to hire such a good level of programmer from the community that, having found them, I was better off letting them go work for the D Foundation on rebuilding compile-time function execution. Libraries mean a bit more work to get idiomatic-style use, but if you write D as C, that's usually okay, and wrapping is a one-time cost.

I would say that I have found it to be more of a problem on Windows. That's because the general C/C++ package management solution on Windows is quite... old-fashioned, and that's not always a D-specific problem.


Can you elaborate?


That is excellent news :)

Congratulations Walter, now let's see D take over the world.



Hehe. You've got my vote. Oh wait.


Anybody worked on performance critical stuff in D? How good is its GC?


I think it's common to avoid the GC in performance critical sections of your code. D's version of BLAS is faster[1] and avoids the GC completely.

[1] http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...


Consistently beating OpenBLAS and having performance comparable with MKL is seriously impressive. I'm wondering how that is possible. I assume it's using the same algorithms as other implementations, so the credit goes to the compiler (and to the language to some extent). But the D compiler just uses LLVM for the back end, so is this because LDC produces really good LLVM IR, or is LLVM really good at generating assembly? Either way, I'm really impressed they can beat hand-tuned assembly; I'd love to see a comparison between the resulting assembly for Mir GLAS and OpenBLAS.


LDC dev here. All the credit for this goes to Ilya Yaroshenko for the expertly crafted implementation and the LLVM developers for efficient low-level code generation (inlining, register allocation and so on, and in some places auto-vectorization). LDC only needs to make sure not to "mess up" things too much.

D does play a significant role in this achievement, though – D's very powerful yet easy to use features for generics and introspection make it possible to finely tune the code for different parameters (sizes/dimensions/…) while still being easy to understand and modify.


That's helpful, thanks!

I definitely appreciate the power of generics, especially when you can do generics over values instead of just types. But I'm having trouble seeing the value of the compile-time introspection; for the most part it doesn't seem to give you much more power than generics do. Could you give me an example of the "killer feature" introspection gives over generics? Bonus points if you can compare it to Rust-style generics with traits and specialization.


Many things, although introspection works hand in glove with generics. Below is a non-exhaustive list.

You can 'Write once - automate everywhere' all the boilerplate. See https://github.com/kaleidicassociates/excel-d/ for an example of automating the interaction between D and Excel. I am in the process of doing something similar for OpenCL and CUDA and will be presenting it at DConf.

You can make at compile time (additional) fast paths by checking to see if a type (or symbol or whatever else) provides a fast primitive to accelerate your algorithm. e.g. if Foo implements fastfoo use that otherwise fallback to a general algorithm.
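In D that fast-path check can be done with `static if` plus compile-time introspection. A minimal sketch (the type and member names are illustrative, echoing the Foo/fastfoo example above):

```d
import std.stdio : writeln;

struct Fast { int fastfoo() { return 42; } } // provides a fast primitive
struct Slow { }                              // does not

// Pick an implementation at compile time based on what the type provides.
int foo(T)(ref T x)
{
    static if (__traits(hasMember, T, "fastfoo"))
        return x.fastfoo(); // fast path supplied by the type
    else
        return 0;           // general fallback algorithm
}

void main()
{
    Fast f;
    Slow s;
    writeln(foo(f)); // uses Fast.fastfoo
    writeln(foo(s)); // uses the fallback
}
```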

See also Andrei Alexandrescu's 2016(15?) talk (IIRC the relevant section is about half way through https://www.youtube.com/watch?v=4oDK91E3VKs).


It's much easier to generate code adapted to that cpu and machine in D because of compile time features. Plus it was an excellent team of a few people working for a few years that built mir. Actually I am joking - mostly one guy working for a few months (with a little help) made mir Blas. A pretty nice example of productivity of D. Also an illustration of the high calibre of person found in the D community.


You can get some information from this comment[1] by the author as well as his others on that thread. This is somewhat outside of my area.

[1] https://www.reddit.com/r/programming/comments/54kg6v/numeric...


The performance of D is the same as that for the corresponding C/C++ program in the corresponding compiler.

    dmd and Digital Mars C++
    gdc and gcc
    ldc and clang


I meant the situations when the GC is a bottleneck, and how people deal with it, by fine tuning GC, etc... Any war stories about this...


Weka use D for making systems that store Petabytes of data in production. They don't really like the GC kicking in on that situation. So they write the first draft of code using the GC, and then for production make it not use GC - pre allocating largely. If you really care about latency I guess you can't really afford to use malloc necessarily.

But for the rest of us, D is not really comparable with Java, but people tend to think of it the same way. I don't use classes myself (I had one, but a guy didn't like it and removed it, though one or two in library code may have crept back recently) but allocate structs on the stack. The latter is more idiomatic generally in D. Depends how you count it, but we're at 120k sloc, maybe 200k if you include the periphery.

It's easy to allocate without GC using the std.experimental.allocator and emsi containers. Regional heaps, free lists, whatever hybrid model you want.

See excel-d for one example.

If you keep the rest of your heap small, say below 200Meg most people will be fine.

If you don't want to use D, blame the docs and lack of examples - still not as good there, but way better than before and all the unit tests are editable and runnable now. But I think the GC thing is more FUD than a real objection for most people.


People on embedded systems use a custom runtime, and people who want to avoid the GC use the compiler to disallow GC usage in their entire program by using the @nogc annotation.

There are other less extreme solutions, such as the ability to have some threads be registered with the GC and others not. Also, you can just find your bottle-necks and mark those specific functions as @nogc. Some people also turn off automatic collections and manually trigger them only when it's ok to pause.
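For instance, taking manual control of collections looks roughly like this (a sketch using `core.memory.GC` from druntime):

```d
import core.memory : GC;

void main()
{
    // Suppress automatic collections (one can still run if memory is exhausted).
    GC.disable();

    auto buffer = new ubyte[](1 << 20); // allocating still works;
                                        // it just won't trigger a collection
    // ... latency-sensitive work with buffer ...

    GC.enable();
    GC.collect(); // run a collection at a point we choose
}
```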


Weka.io develop a software-defined distributed storage system in D with latencies well below a millisecond (over Ethernet). They use the GC in some parts, but avoid it on the hot path as much as possible as it is a simple pause-the-world mark-and-sweep collector.

All in all, the GC isn't as much of an issue for high-performance applications as is sometimes claimed, but all the recent work towards reducing the dependency on it has of course been done for a reason – in performance-critical code, you often don't want any allocations at all (GC or not), and the large jitter due to collections can be a problem in some situations.


Auburn Sounds makes audio plugins in D and shares how they avoid using the GC in blog posts. Not that bad even for audio. Sociomantic and Weka both do soft real-time.


Is there support for BigFloat in D/phobos or any auxiliary library? I was playing around with D sequences and wrote a D program that calculates a Fibonacci sequence (iterative) with overflow detection that upgrades reals to BigInts. I wanted to also use Binet's formula which requires sqrt(5) but it only works up to n=96 or so due to floating point precision loss.
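There's no BigFloat in Phobos as far as I know (you'd need a third-party binding, e.g. to MPFR, for Binet's formula at full precision), but for the integer half, the iterative version sidesteps overflow entirely with `std.bigint`. A sketch:

```d
import std.bigint : BigInt;

// Iterative Fibonacci with no overflow, using Phobos' arbitrary-precision integers.
BigInt fib(size_t n)
{
    BigInt a = 0, b = 1;
    foreach (_; 0 .. n)
    {
        auto t = a + b;
        a = b;
        b = t;
    }
    return a;
}

void main()
{
    assert(fib(10) == 55);
    assert(fib(100) == BigInt("354224848179261915075"));
}
```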


Amazing! D has really been exciting me for the past couple of years. It has great potential.

Hopefully a fully FOSS compiler will bring it right into the mainstream.


Thanks a lot! I am also consistently amazed at the performance of the forum like any other day even though the story is on top of HN.


No matter what people say about the applicability of Dlang, Dlang has a very bright future in science. GIS, compilers, math environments will benefit by translating to D instead of C++. D is my language of choice for this stuff.


This is great news. I was using LDC because the DMD backend was proprietary. Thank you Walter, Andrei and whoever made this possible.


We LDC developers of course hope that you will continue to use LDC for all your non-x86 and high-performance needs. ;)


It produces smaller binaries and it has fast compilation times. So yes, I will continue using it on x86 as well.


It's really surprising that, to this day, there are languages in use whose reference implementation is closed source. All the optimization and collaboration possible when it's open is invaluable.


Have there been any new books out there to learn D? I have one that still references the Collection Wars (Phobos vs Native). Once I saw that, I put the book back on the shelf and stuck with Java.



This is big. I've heard from many people that this hinders adoption.


Is anyone else surprised that it wasn't before?


I wanted to play around with D using the DMD compiler but it's unfortunate I have to install VS2013 and the Windows SDK to work with 64-bit support in Windows. I've installed VS in the past and found it to be a bloated piece of software I'm not willing to do again.


what is it like to 'bootstrap' D? I know in many languages you can forego the standard library and 'bootstrap' yourself on small platforms (C being the main example).


Tremendous effort. Congrats.


This is awesome news!


Please get rid of GC :(

I want to have smart pointers instead


No problem, just add the @nogc annotation to your main function.

    int main(string[] args) @nogc { ... }


I haven't used D much- how much of an impact does that have when using the standard library or other libraries? Is there a good way to use things that assume a GC even when it's disabled?


There are many parts of the std lib that assume GC usage. When you mark a function as @nogc, DMD does a compile-time check and will not compile your program if there's a GC call. Applying this to main means that your entire program will by necessity be GC free.


Right, I understand that it effectively eliminates GC usage, I'm just wondering how much of an impact that has on what tools you have. In the extreme, I can imagine it degenerating into basically writing C, for example.


You have access to all of std.range and std.algorithm, most of std.datetime, many functions in std.string and std.digest, and others scattered throughout the std lib. The main thing you'll want to look at is std.experimental.allocator, which has tons of tools for manual memory allocation.
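A minimal sketch of allocating through `std.experimental.allocator` instead of the GC (the `Point` type is just for illustration):

```d
import std.experimental.allocator : make, dispose;
import std.experimental.allocator.mallocator : Mallocator;

struct Point { int x, y; }

void main()
{
    // Allocate from the C heap instead of the GC heap;
    // dispose frees it deterministically.
    auto p = Mallocator.instance.make!Point(3, 4);
    scope (exit) Mallocator.instance.dispose(p);

    assert(p.x == 3 && p.y == 4);
}
```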

Most of the community packages out there use the GC. Some though are transitioning to a std.experimental.allocator interface, like this containers library https://github.com/economicmodeling/containers


See Sociomantic's Ocean library. But you don't need to go that far in practice in most uses. If you keep the GC heap small, disable the GC in critical code paths, have threads that the GC doesn't get involved in, and use allocators, most people will be fine.


Actually, please don't get rid of the GC. The GC makes programming easier, and there is Rust now if a programmer believes they cannot possibly tolerate one. Complaining about the GC is like complaining about Python's whitespace, Lisp's parentheses, Go's lack of generics, etc. It's demanding that the entire language change to suit the complainant. However, if you're going to have one and people keep bringing it up, you should work hard to make sure that it is the best it can be. Golang has been very good in this regard with their very publicized work on reducing the stop-the-world time of their GC. Personally, I think it would be a win if Dlang did something similar (in addition to the @nogc stuff). It doesn't need to concentrate on reducing the GC pause time; it just needs to be seen to be getting better.


Isn't it possible to have the GC as a library rather than a language construct?

The way I see it, GC should just belong as deferred_ptr


Smart pointers are a form of garbage collection. (I'm assuming you mean reference counting).
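In D, for example, `std.typecons.RefCounted` is exactly that kind of smart pointer. A sketch (the `Payload` struct is illustrative):

```d
import std.typecons : RefCounted;

struct Payload { int value; }

void main()
{
    // RefCounted!T frees the payload when the last copy goes out of scope:
    // automatic memory management without involving the tracing GC.
    auto a = RefCounted!Payload(42);
    auto b = a; // reference count becomes 2
    assert(b.value == 42);
}   // count drops to 0 here; the payload is freed deterministically
```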


Not all smart pointers are reference counted.


Sure, smart pointers can also refer to bounds checking, etc.

In this case, grandparent did mention a memory management technique, namely garbage collection.

Memory management plus the typically required thread safety usually means reference counting.


unique_ptr<T> in C++ and Box<T> in Rust are both considered smart pointers, dealing with memory management, and don't do any reference counting, bounds checking, or anything else like that at runtime. They're a compile-time abstraction only.


This is not really accurate. A unique_ptr<T> move assignment must check the target location to see if it's null, and if not, free the underlying object. It also has to clear the source location. This is more than just a compile-time abstraction compared to raw pointers, even if modern compilers can elide the code most of the time.

Also, the lack of checking leads to unique_ptr<T> dereferences potentially having undefined behavior (i.e. whenever you try to dereference a unique pointer that is null [1]).

[1] http://en.cppreference.com/w/cpp/memory/unique_ptr/operator*


I think you're picking nits. The equivalent raw pointer code would also have to free a pointer it was overwriting, so I'm not sure that's so interesting (definitely not in terms of "zero cost abstractions").

That said, the null checks required for C++'s destructor semantics are definitely a little more interesting in this respect, given one is rarely going to do this with raw pointers (although the cases when you don't need to do it also seem like cases the optimiser can handle fairly easily), but an extra branch and/or write at least seems significantly qualitatively different to the full reference counting that people often think of when talking about "smart pointers", which is the point the parent is trying to make (smart pointers aren't just reference counting).


I'm not picking nits. This is not just about the freeing (which I assume will happen in the minority of cases), but also the test-and-branch code generated (plus the zeroing of the source location), which is something that you can't just ignore even if branch prediction is perfect. The claim that they are a "compile-time abstraction only" is pretty absolute.

Also, the cost of it not being memory-safe should not be ignored.


I was claiming you were picking nits because the main point of the comment is that smart pointer != reference counting; the incorrect "only" claim is somewhat orthogonal. And my point about the qualitative difference between the runtime behaviour is definitely true (although I'll now strengthen it to a quantitative one: moving a unique_ptr doesn't require dereferencing it, unlike a reference-counted one, and nor does it require any synchronisation, unlike thread-safe reference counting ala shared_ptr).

In any case, it is unfortunate that unique_ptr is not memory safe, but it also not at all notable, as pretty much nothing in C++ is completely memory safe (even the other smart pointers like shared_ptr).


This is why I wrote "this is not really accurate" as opposed to "this is wrong". The claim was considerably broader than "smart pointer != reference counting".

The deeper problem is that unique pointers essentially are a sort of linear/affine type system, but that C++ leaves the actual checks to the runtime (which implements some, but not all of them).


Ah! That makes perfect sense, thank you. I guess it's just Box<T>, then :p (This is the difference in move semantics between C++ and Rust showing.)

I think my original point still stands here though: smart pointers != refcounting, inherently.


Good point.

Somehow I wasn't considering unique_ptr<T> as a smart pointer, despite using it all the time for resource management to avoid writing wrapper classes.

I do now, you're right.


Off topic question; are you related to https://en.wikipedia.org/wiki/Jacques_Chirac ?



