Using D to Create the World’s Fastest File System (dlang.org)
250 points by aldacron 10 days ago | 161 comments





I have wondered for a while now why Rust is getting all this attention for being "Like C but safe and cool!", but no one mentions D.

I personally don't do systems programming because I'm not smart enough, but in the little bit I have done, I found D to be the most natural-feeling (even over Rust), and D has been around a lot longer.

I suppose timing is everything for these kinds of things.


The main reason the way I see it is that a lot of people who still use C and C++ need maximum speed, and D is only memory-safe with its garbage collector enabled. This affects latency and adds overhead. For many cases using a garbage collector is fine, but D was trying to target video games and other spaces where the GC was not acceptable. So D allows you to turn the GC off, but then it's just a vastly less-mature C++ with similar safety footguns and worse library support. Rust actually fills a space that wasn't filled by anything else - memory safety without a garbage collector. You can write pretty ergonomic code in Rust (similar to how code would look in a dynamic and/or GC'd language) just like D, but where in D you would have to choose between highest-possible efficiency and safety, Rust gives you both at the same time.

This isn't to say that Rust is better than D, but it is to say that Rust targets a gap in the market whereas D is trying to have the same tradeoffs as C++ but be better, which is nowhere near as compelling for most developers who've invested a lot of time into learning C++. C++ developers don't use D for the same reason that most people don't switch from qwerty to dvorak even though it's technically more efficient. It's not such a qualitative change that it's worth the retraining.


> D is only memory-safe with its garbage collector enabled.

This is technically correct, but not pragmatically correct. D doesn't have every memory unsafe operation plugged, but it's pretty darned close, and does have the major ones covered (like array bounds checking, alternatives to pointers, RAII, etc.).

> library support

D has excellent library support.


> (like array bounds checking, alternatives to pointers, RAII, etc.).

Yeah but C++ has all of that, too, as well as a vastly broader ecosystem in the space of non-GC'd, ultra-high-performance languages.


C++'s array bounds checking is there only if you use vector<>, not regular arrays.

And D's bounds checking is only there if you don't use pointers or @trusted.

FYI, std::array also has bounds checking (via .at()); you're not limited to vector<> to have that.


D will give you compilation errors in @safe mode if you try to index off of a pointer.

Is the D GC actually that bad? Do we have benchmarks or something to prove that the GC means D is dead in the water? I always hear about this hypothetical danger of GC but my D usage hasn't found it and my D programs typically match or outperform my C++ programs. I mostly use it for numerical code, so maybe that's why I'm not feeling the pain of the GC?

To address your final point, I find D vastly superior to C++. Immensely, tremendously, bigly superior to C++. It's so nice, it's so pleasant, it's so easy. It's not an incremental change, it's a whole new world of no segfaults, no template metaprogramming, beautiful error messages, beautiful compile-time calculations.


There is no such thing as a "good GC" in this context. Game engines go to great lengths to do two things:

1) Have bounded memory usage

2) Have consistent frame times

These goals are critical to maintain a consistent framerate on fixed-memory devices (aka, consoles). GCs, by their very nature, are not compatible with either of those goals. They need lots of spare memory to go fast, and they cause hiccups when collection happens. They are a non-predictable, non-consistent load.

If you're not working on a problem that is a sustained, consistent workload that needs sustained, consistent result generation then no, you probably won't see the same GC problems. GCs work great when the work is bursty or when the work has no latency associated with it. There's an awful lot of programs that fit in that model. Games just aren't one of them.


How are GCs not compatible with bounded memory use? Though many GCs size allocation arenas proportionally to the live data, there is no fundamental reason why the allocation arenas couldn't be fixed-size (of course, they must be large enough for good performance, etc.).

And yet there are GCs in Unity and Unreal Engine, the most popular game engines of today.

Only the scripting code is GCed. The core engine in both cases is C++ with manual memory management for the reasons stated above.

In Unity's case, with some subsystems in the process of being replaced by HPC#.

And in Unreal you can use GC in whatever components you feel like, it is a matter of weighting where it makes sense.


> On Unity's case with some subsystems in the process of being replaced by HPC#.

They are converting their C# code to HPC# which specifically does not do GC allocations.

https://youtu.be/NF6kcNS6U80?t=886

  HPC#:
   * no class types
   * no boxing
   * no GC allocation
   * no exceptions for control flow
They are building a compiler to eliminate the GC from the GC'd language they were using.

Once upon a time, C code for games engines looked like this.

    void do_stuff(void) {
        asm {
            /* pages of hand-written x86 */
        }
    }
Hardly any C code in sight, and most of the stuff could equally have been coded in MASM/TASM with their higher level macros.

Eventually game devs migrated to proper C.

A couple of years later, C++ code in game engines was not even "C with Classes"; rather, it was C compiled with a C++ compiler.

Eventually game devs migrated to proper C++.

Looking at the past, which I was partially part of, I see HPC# as a stepping stone into regular C#, in a similar vein that C code of yore was full of inline Assembly.

It is of course a matter of opinion, however I like to look at the past to see where the future leads to.

So let's see who's right in about 10 years, regarding how game engines get implemented.


Except they are stepping AWAY from C# here, not towards it.

And the game-industry as a whole is moving to data-oriented design, which tends to be hard to do at best (if not downright impossible) in most GC'd languages.

Regardless, the transition from C to C++ has no performance cost (often a performance advantage), so I'm not sure why you're bringing that up as some significant migration in this case. It doesn't support any argument for a transition from C++ to C# or similar, as that comes with a huge performance cost, particularly in an era where CPU performance hasn't improved for almost a decade now.


Modern CPU performance gets exploited by taking advantage of multi-core, and GC is a very helpful tool for writing lock-free data structures.

Additionally dataflow languages are better suited for GPGPU programming than plain old C and C++, as Google is exploring with XLA on Tensorflow, and game engines with graphical tooling for shader generation.

Again, let's come back in 10 years and see who was actually right.


Electron is a very popular application framework today.

On what concerns video games, all major engines do use GC on gameplay code, including variations of C++ GCs.

Emphasis on gameplay code. And even there, if you target 60fps (admittedly a minority of developers), a GC is a liability at best.

But that is exactly the point. One should not constrain productivity based on hypothetical goals, web scale and similar arguments.

Doing the next Fortnite, Crysis or ground breaking AR/VR/Ray Tracing? Sure, every ms/byte counts.

Doing a typical Flash like casual game, game prototypes at Ludum Dare, participating at IGF? Having a GC around isn't the biggest concern.


> Doing a typical Flash like casual game, game prototypes at Ludum Dare, participating at IGF? Having a GC around isn't the biggest concern.

But for those cases, what would move you to use D instead of an entirely memory-safe language like Python or Lua?

To justify using D for games, you'd need to find a use-case where speed (e.g. targeting 60fps) is high-priority enough to be writing a lot of low-level code; and yet, where "having a GC around" still won't cause problems. Can you think of one?


Yes, because it compiles to native code and has relatively fast compile times.

Some AAA studios do use D instead of a C++ based script language exactly for that.


Only one studio, no more. And I hear there are some questions about this decision...

Actually, GC can decrease overhead depending on your use of memory, since it only frees memory when it's needed.

> GC ... only frees memory when it's needed.

This is actually one of the worst features of GC where game development is concerned. Memory usage isn't the main problem to be solved in game engines, it's consistency of execution time.

In that context, even if you're spending more resources on memory allocation/deallocation overall, it's much better to know that you can consistently fit that overhead into the 16 milliseconds you have for this frame rather than having it mediated by a system you do not have direct control over.

There are also other advantages to having more direct control over memory management than is typically allowed by garbage-collected languages. For instance, CPU cache-coherency can be a major performance concern when dealing with the type of computation required for games. Without being able to lay out memory and deal with memory in a precise way, there are whole categories of optimization which are not really possible.


"Overhead" is a term with many definitions.

Non-conservative GCs use a lot of extra metadata to be able to figure out what's a pointer and what isn't. Conservative GCs leak.

In both cases, GCs cut into worst-case latency.


> Conservative GCs leak.

Yes, but usually the leak is bounded [0], although I'm not aware that they evaluated real-world game engines. I guess frame jitter is the bigger problem there anyway.

[0] https://www.hboehm.info/gc/bounds.html


D had a closed source reference compiler for a long time, and didn't have any major backers.

Rust had Mozilla backing early, and was fully open and transparent in terms of community feedback, and wasn't just making a language that had high level features, but was pioneering a new category of language that added tons of safety guarantees at compile time by default using a new paradigm of enforcing borrowing and ownership.

These safety guarantees are not just in regards to memory safety, but also in regards to data races.

D seems to allow memory safety guarantees, but not by default, and I believe it is enforced at runtime, not compile time.

So Rust was at once more accessible and more exciting.


D's only runtime memory safety feature is array bounds checking. (The same as Rust, I believe.) The rest is compile time.

D's compile time memory safety focusses on:

1. @safe and @system code

2. disallowing unsafe pointer operations

3. safe alternatives to pointers

4. controlling escaping of pointers

5. controlling the scope of addresses


Rust has other runtime memory safety checking, like RefCell.

D's compiler was never "closed source" if by that you mean "source not visible". It had a weird non-open source license for a long time, but the source has always been visible.

The weird license did use to scare me away, but that's very much a thing of the past; it now has a standard free license.

https://news.ycombinator.com/item?id=14060846

I don't think D's features are any less exciting than Rust's, but Rust has had a lot of RESF backed by Mozilla.


Source code being visible doesn't make it open source. It had a personal-use-only license, which isn't a valid OSI-approved license (which is what makes something open source).

That's why I qualified it with "if that's what you mean by closed source". I don't agree that all source is either open or closed, with anything that isn't OSI-approved counting as closed. I don't think that's what people usually mean by "closed source".

I also freely granted that dmd's ad-hoc license wasn't open source despite having visible source.


> D had a closed source reference compiler for a long time, and didn't have any major backers.

In addition, wasn't there for a while two separate and incompatible versions of the standard library that fragmented the community?


That was 9 years ago and the second is no longer maintained (for about 2 years at this point).

That's fair, I suspected that people who knew more about this stuff had a reason.

Compiling slower than C++ is not exciting.

Just to be clear, p0nce is referring to Rust's compile time. D is known to have very fast compile times (unless you are doing very template-heavy metaprogramming).

Rust brought something new that no other language had: memory safety through statically enforcing object ownership and borrowing. Anyone who has done manual memory management already is familiar with those concepts, and how difficult that can be to track in large systems. Whenever a language is able to abstract pervasive concepts to first-class entities in a language, it becomes attractive to people who deal with those concepts on a regular basis.

D, from what I am aware of, did not offer any new abstractions. Rather, it was an attempt to do all of the existing abstractions in a better way. Which is laudable, but it can be hard to convince people to switch to a language when it offers only iterative improvements.


I'm excited for Rust because it might be finally time for a better safer embedded language (no GC is basically required here) and I like using it for little tools where I process huge text files.

Zero copy (in a safe way) makes everything super high speed.


We are working on improving scope at the moment, which is quite an improvement. We're tying exceptions into it too, to remove allocations.

> I have wondered for a while now why Rust is getting all this attention for being "Like C but safe and cool!", but no one mentions D.

D is still built around a GC, so it doesn't really fit into a discussion about GC-free languages. Unless you want to focus on its "better C" subset, which I think puts it at a disadvantage compared to any language not fundamentally designed around the presence of a GC.


Nah, -betterC gives you an advantage over C. It isn't meant to give you an advantage over languages like C++. It's in the name, after all ;)

You can use -betterC and link with C++ code, too.

D is in an awkward spot. C++/Rust folks dislike it for using a GC. Java/C#/Javascript folks dislike it for being too C++ like with pointers, templates and all. The people who use it don't care about language wars and just enjoy their tool.

I can identify several traits that helped Rust be more marketable over D.

First is that Rust is making tooling a central issue they're working on. Right now you can install VSCode and get a Rust plugin with a single click or get a VisualStudio plugin with a single click, and that gets you intellisense right away. I'm pretty sure that Rust's editor tooling has reached and surpassed D's in a comparatively tiny timeframe.

Second, Rust comes with an opinionated build system and module system out of the box, while D just has a compiler; you need to figure out building yourself and use a separate package manager. This is something you would be very comfortable with if you're a C/C++ programmer, but not so much if you're coming from pretty much any other mainstream language. These days it's expected that a language comes with a compiler, build system and package manager.

Third, Rust is more novel, thus more interesting. It's a new language combining new programming paradigms in a novel way. Go has channels and goroutines, Rust has the borrow checker and zero overhead compile-time memory safety. D has... it's trying to be a better C++. It's succeeding very well, but its features aren't exciting.

In conclusion, Rust has come out the gate with great marketing, with novel and interesting features not seen before in a system programming language, and with great tooling support for the current generation of developers who don't want to bother with learning arcane Makefiles (or CMakefiles as the case may be today). D is just a better C++ with the same friction to use as the language it's trying to replace, but without the top-notch tooling that exists for C++. Rust on the other hand also doesn't have tooling quite as good as C++, but it's not replacing C++ in existing projects, it's used for new projects.

One case where D significantly outpaces Rust is in its ability to interface with C++. This gives it a significant edge in being considered for use in C++ projects, and I would expect that there are many cases of mixed D and C++ usage among the D success stories. Rust's lack of C++-compatibility story (I will be interested to hear how Mozilla integrate it with their C++ code!) is really what's holding the language back for my use-cases. The C ABI may be the lingua franca of low-level code, but a significant amount of native code is written in C++ and I need to interface with it natively via the platform's dominant C++ ABI. D would really shine in mixed C++ projects, but it really needs a turnkey solution for intellisense integration, i.e. better tooling. It would also need seamless debugger integration, letting me trivially step through D, C++ and C code in a mixed setting, and letting me place breakpoints, watch memory, etc.


> (I will be interested to hear how Mozilla integrate it with their C++ code!)

There was a good blog post about this the other day: https://news.ycombinator.com/item?id=18588543


The Rust community is extremely hostile, especially against other languages, and D in particular.

I think I know what you are talking about, there does seem to be a general sense of elitism in the Rust community. Kind of a thought that Rust is the best language for almost any task, why would you use anything else? I think that thought process is present in many software communities however.

I have found the Rust community to be one of the most helpful and welcoming to beginners, and I don't believe this contradicts what I mentioned before. Rust developers want their community to grow, so they spread the good word. Maybe it comes off too strong sometimes, pointing out where Rust has strengths against other languages, but if that is among the worst the community does, I would consider the community pretty friendly.


Really? The few times I've used Rust, the people on the IRC channel were really nice to me and helped me get bootstrapped.

It's not hostile. It's aggressive in selling the strong points of the language. Having said that, I do wish newer languages would spring up that adopt the good stuff from Rust. I can never like Rust's syntax.

A ludicrous amount of engineering has gone into the Rust compiler, the extensive documentation in RFCs about its design, the huge language docs that took a million man hours to make, RLS is another huge undertaking...

I do foresee Rust getting a CoffeeScript of its own. I'm in love with the control flow model of the language, but always felt some things that were in from day one (like double-colon :: namespace delimiters, ampersand references, asterisk pointers and dereferencing, semicolons, curly braces, etc.) were just wholesale copied from the languages du jour, because the focus was on the borrow checker as an innovative feature. So the semantics and ergonomics were completely phoned in early on, and by the time the modern development process around the language and compiler matured, the glyphs were so totally entrenched there was no way to reconsider any of it.

A syntactic wrapper that gives you the types and behaviors of Rust, with beautifications like significant whitespace instead of mandatory brace delimiters and semicolons (like Python), plus single-colon (or maybe even dot) namespacing, would be an insanely hard project: parsing Rust is already insanely hard, a lot of work went into minimizing compiler passes to parse the grammar, and keeping that with fewer control characters in an intuitive way sounds like a substantial exercise in ergonomics. But such a wrapper, compiling to ugly normal Rust, would be fantastic for helping newbies avoid the "glyphic overload" I easily see happen to anyone I try to preach Rust to.


I totally agree re the ergonomics. It really feels to me as if a lot of C/C++ conventions were copied over, with no justification beyond "this is what system programming languages look like".

It would be ironic if D language did precisely that.

Where do you see that happening? I'd love to tell some people to cut it out!

In this post about D, the top comments are about how great Rust is. But mentioning D in a post about Rust is like sticking your head into a bee-hive.

The top comment is about how much they love D, and wonder why others don’t. So people are answering that question.

I don’t see how that’s the Rust community hating other languages, but answering a question about preference. They’re all respectful answers.

I don’t see it here, sorry :/


Not at all, they even put up with my schizophrenic arguments of C++ vs Rust vs whatever.

In my experience the rust community has been incredibly welcoming and nice.

The Rust Community is:

    Welcoming
    Inclusive
    Developers prioritize mentorship
    RFC process
    Energizing

It's written in the Rust Marketing Handbook so it's true: https://rust-lang.github.io/rust-marketing/pitches/community...

Because there is no big company behind D. Rust isn't as good as boasted, but Firefox and HN are willing to inflate the bubble. That's their company's strategy.

> Rust isn't good as boasted

What do you mean? What's being boasted about Rust and what makes it untrue?


I would have liked to hear more about what exactly made D so great; reading through it, it's kind of vague. He mentions generics, finally getting LLVM support working as a target, and being able to use D at both a high and a low level, but I don't see anything here that's groundbreaking.

Could someone who uses D weigh in?

Hate to be that guy, but if you could compare it with Rust, that would be double A++ good... I know the targets are somewhat different, but I find that I can write high-level Rust and low-level Rust. Also, I get the feeling that you could very easily have used Go for the same purposes (D was around long before Go IIRC, so maybe Go just wasn't around at the time).


My experience, which of course is anecdotal, is that the advantage is it's easy to change the algorithms and data structures.

I've maintained C and C++ code bases for decades, and I've found that the first algorithm I tried has stayed there in the code. It gets tweaked, optimized, refactored, but it's the same algorithm and data structure.

With D, when I developed the Warp preprocessor, https://github.com/facebookarchive/warp I continually tried different algorithms & data structures to see which was faster.

One reason it's easier is that in C/C++, . is used for value field access, while -> is used for pointer field access. In D, . is used for both, so you can easily switch between a value and a pointer to the value with little code change.


> In D, . is used for both.

Yes, this is a great feature. Rust also does this, and I attribute substantial portion of ease of refactoring Rust to this feature.


Oddly enough, so does Cython.

> In D, . is used for both

This is an absolute pleasure to work with. Turtles (.) all the way down (pointers, values, modules, etc)!


Go also does this, although I never appreciated how useful it is until you made that point. Thanks for that. Also thanks for writing D; it was great to read through its documentation, and I've been meaning to play around with it.

Thanks, this was a good explanation of where the likely benefits originate for the use case in the OP!

> In D, . is used for both

Also for module access (as opposed to :: in C++). I feel like D has a lot (too much?) of syntax, but this is one place where it manages to cut down on it a bit.


D has significantly less syntax than C++; it barely has more than C. The weirdest syntax to get used to, in my experience, is the template stuff; most everything else is fairly natural. And even then, the template stuff is way less complex (but no less powerful) than C++'s.

That would be nice. I wonder if C would ever consider adding that. Types are tracked by the compiler so it doesn't seem theoretically impossible.

D feels much more like a 'sane' C successor, dropped from a parallel dimension where C++ could evolve without caring much about breaking its legacy code running the world. It has a few warts, some D v1 leftovers, but overall feels much more coherent than modern C++; even as the latter has grown much closer to D by piling more on top of its templating system.

What I personally found to be the standout feature of D is its compile-time template/macro system. D was designed by a compiler builder, and it shows. Its compile-time metaprogramming (generics/templates) was built on compiler reflection, whereas C++ did it by abusing template instantiation. I bet every modern C++ programmer who writes his first "if __traits(compiles, S.s1) ..." gets a huge grin on his face, remembering the C++ SFINAE abuse needed. A crude analogy would be working with Lisp macros after years of C preprocessor hacking.

In short, I would recommend D for projects where you want better rapid prototyping and feature turn-around than C++, but the end result needs still to be C-style speed.


If I may be blunt, I could hardly believe SFINAE was a feature, and hated implementing it.

But one thing I liked about C++ was the partial ordering scheme for template selection. I liked it so much D uses it for both templates and function overload selection (as opposed to the multi-level argument matching scheme C++ uses). Once I figured it out, I thought it was a beauty.


Thank you for keeping a unified dispatch/selection algorithm in D! I feel that in language design, simplifying/streamlining the programmer's mental model is often undervalued compared to "but you would expect X to happen in this situation". Case in point, operator precedence hierarchy compared to uniform application a la Lisp and Smalltalk. I personally gladly accept some up-front weirdness if it gives me long-term uniformity.

PS. I also had a pleasant time when I finally grokked Lisp's CLOS generic method partial ordering. It even gives you the ordering on demand as part of the meta-object protocol: https://clisp.sourceforge.io/impnotes/mop-gf.html#gf-argumen...


Do you mind giving an example of the specific "C++ partial ordering scheme for template selection"?

https://github.com/MicrosoftDocs/cpp-docs/blob/master/docs/c...

> Multiple function templates that match the argument list of a function call can be available. C++ defines a partial ordering of function templates to specify which function should be called. The ordering is partial because there can be some templates that are considered equally specialized.

> The compiler chooses the most specialized template function available from the possible matches.

And it goes on to give an example.


Essentially, it is the algorithm by which the C++ compiler selects what template specialization to use at your 'call' site. Function templates can be overloaded and essentially create an entire family of specializations. As a C++ programmer, you either modify+compile+run until you get your desired call/specialization; or you take the plunge once your start to heavily rely on meta-template magic and try to grok the ordering scheme defined by the standard.

Crucially, this is completely different from say the way function overload resolution happens in C++. Fun stuff happens in your brain when you try combining different types of selection/overloading (ever tried combining template and regular function overloading on a templated class method?).

Compare ADL (https://en.cppreference.com/w/cpp/language/adl), overload resolution (https://en.cppreference.com/w/cpp/language/overload_resoluti...) and function template overloading (https://en.cppreference.com/w/cpp/language/function_template...)


I like D, but I'd draw different conclusions. D 1.0 was kind of like a "native C#" for me - a language that combines performance, pointers, direct access and ease of garbage collection, batteries included standard library. I loved it. D 2.0 though, I have more mixed feelings about.

It's trying to be a better C++, but in an effort to woo C++ programmers it's slowly becoming C++ itself. D's curse and blessing is that it tries to cover all bases. Safe programming? Kind of supported. OOP programming? Kind of supported. Functional programming? Kind of supported. GC? Kind of supported. No GC? Kind of supported. Everything is supported to some extent, but it takes some effort to learn what exactly is supported in which subset, what language features don't interact well with each other. It's a big boost for expert D programmers, but for less expert ones it's a lot of mental overhead to worry about.


If a function is pure, the compiler automatically infers this and treats it as pure; we don't have to annotate it with the pure attribute. Also, D offers the right balance of functional programming, as far as I can tell. Can you elaborate on what's missing from that point of view?

Being able to mark a function as pure means the compiler will check it for you; it's a guarantee. Inferring purity is useful, but then it isn't visible in the signature whether the function is actually pure or not.

Not to forget, C++ template metaprogramming was discovered by accident. That's probably the only feature (and C++ the only language) I can think of that was discovered by accident and yet is utterly essential for introspection capabilities.

> Not to forget, C++ template meta-programming was discovered by accident.

yep, in 1993 some graybeard working on GCC checked files by last modification date to do some refactoring and saw one that hadn't changed since july 7, 1926, aptly called "alexandrescu.c". When he opened it, his computer started glowing yellow and his BSD prompt suddenly changed from '$' to '>'. He executed the testcase associated and the horror happened... a 666 kilobyte dump of template error was uploaded right into comp.lang.c++, which blew the poor hacker's 486DX away. Since then templates roam free again in our universe.

OR

when people say it was discovered, they mean it was discovered during the process of designing templates; they found it nice, so they kept it.


Brilliant!

> Not to forget, C++ template meta-programming was discovered by accident

That's a bit of an uncharitable reading, no? For others that would like to make up their own mind: https://softwareengineering.stackexchange.com/questions/1253...


Lisp was kind of a set of accidents too. Remember M-expressions?

It's usually described as a C++ successor, rather than a C successor.

There was a C successor that's been in progress lately, saw it on HN a few months back, I forget the name of it.

EDIT: Was thinking of Zig


> It's usually described as a C++ successor, rather than a C successor.

It is, and I honestly don't see that at all. I interpret "successor" to be more than just an alternative. D is, more or less, a superset of C - there are some small differences, but you can do much of your port from C to D by simply changing the file extension from .c to .d. Heck, you can even include C header files directly in a D program. Although D does most of the same things as C++, it does them in very different ways.


D is a work of love by Walter Bright and the community. I like to think of D as a true successor to C++ when people compare Go to a C++ alternative. It is between Rust and Go, but it doesn't do away with object-oriented programming or the many paradigms that exist for different programming approaches. I usually tell people that if you can grasp C, C++, C# or Java, you are going to understand plenty of Go.

Rust has a steeper learning curve; Go gains plenty of traction from this simple fact.

You hear it often enough: "I wanted to learn Rust but Go was simpler to get into." But if those same people gave D a shot, they might never look back. D is best understood once you start hacking away at D code, because you will appreciate the beauty in D. Different people love different things about it. I love that it is compiled, and I don't get fazed by the optional GC; it doesn't stop Java, C#, or Go from being awesome...


I think the optional part might have a bit to do with it. In Java, C#, Go you are 'stuck' with the garbage collector, so people learned to live with it and usually realized it's not as bad as people claim it to be.

There are a few other subtle things I like about D too. The constructor[0] for a class is just this() {} instead of SuperLongClassNameFactoryShopPlaceThing() {}; instead of redundant typing, the constructor gets the most obvious name possible: "this". (Python has __init__, plus the @classmethod decorator for when you have to 'overload' the constructor.) There's also 'auto', which is similar to 'var' in C#: if you are instantiating a class, why the hell do you need to write the type twice? The compiler already knows, the second you invoke the class constructor, that your variable will be of that type. These are seemingly minor things, but as a programmer who loves programming and programming languages, I appreciate language aesthetics and syntactic sugar.

Also parallel compilation is my absolute favorite. When I found that compiler flag and used it I was amazed at how fast code compiles with the D compiler.

[0]: https://dlang.org/spec/class.html#constructors
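
To illustrate (my own toy example): the constructor is spelled this no matter what the class is called, and auto removes the duplicated type name at the call site:

```d
import std.conv : to;

class Vec2
{
    double x, y;

    // The constructor is always spelled `this`, however long
    // the class name gets.
    this(double x, double y)
    {
        this.x = x;
        this.y = y;
    }

    override string toString() const
    {
        return "(" ~ x.to!string ~ ", " ~ y.to!string ~ ")";
    }
}

void main()
{
    auto v = new Vec2(3, 4);  // `auto` infers Vec2; no repetition
    assert(v.toString() == "(3, 4)");
}
```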


Coincidentally I started reading on D last weekend. There are lots of things that are appealing to me (at least in theory since I haven't used them): UFCS, modules, pure, @safe, contracts, support for "D-scripts" using shebang, etc.

I just need a project and some time to play around with it, but my first impression as someone that worked with C++ and Python in the past is really positive.


Yeah I'm mostly coming from C# / Python so I absolutely love D. I'm not sure what kind of projects you're interested in working on, but if by chance they are web related (or wouldn't mind trying web development) I highly recommend Vibe.d[0] and DiamondMVC[1].

[0]: http://vibed.org/

[1]: https://diamondmvc.org/


Same happened with me and Oberon.

Using an OS written in a GC-enabled systems language made me realize that not only was it viable, it was quite productive to work in such environments.

The problem is having a big name OS vendor willing to push it to the nameless crowds no matter what.

So far we have to be happy with half-hearted steps like Swift, ChromeOS/Android(Things) with a heavily constrained NDK, .NET/UWP on Windows, and maybe in a decade, with more baby steps, we will reach what Oberon/Topaz/Spin/Inferno/Singularity/Midori had in their designs.


I fully agree. I find myself always fighting the Rust or C compilers. Not so with D (or Go, for that matter). The GC is very useful, as one doesn't have to worry about memory allocation, while it's also easy enough to work around it by specifying static array lengths, etc. One can now also disable it in functions marked with the @nogc attribute.
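
A minimal sketch of what that looks like (my example, assuming a reasonably recent compiler): mark a function @nogc and the compiler rejects anything in it that could allocate from the GC heap:

```d
// @nogc makes the compiler reject any operation in this function
// that could allocate on the GC heap (~, new, closures, etc.).
@nogc nothrow pure
int sumSquares(const(int)[] xs)
{
    int total = 0;
    foreach (x; xs)
        total += x * x;
    return total;
}

void main()
{
    int[4] buf = [1, 2, 3, 4];   // fixed-size array: no GC involved
    assert(sumSquares(buf[]) == 30);
}
```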

> I would have liked to hear more about what exactly made D so great

This WIP article explains very well the USP of D:

http://erdani.com/hopl2020-draft.pdf (section 3)

This article highlights the value of "design-by-introspection" applied to a particular problem space: https://blog.thecybershadow.net/2014/03/21/functional-image-...

This talk presents a concrete example of such: https://www.youtube.com/watch?v=LIb3L4vKZ7U (mind bending stuff)

You might not understand what is new about it, but soon your competitors will understand it.


One of the nice aspects of D is that it interfaces with C++ really nicely. I'm not overly familiar with how Rust handles this, as the last time I tried to glue together Rust and C++ was more than a year ago, and the tooling was in evolution.

From Go / CGo, it can be pretty scary to interface with native C++ code, and it requires lots of annoying boilerplate.


"C++ has a very complex ABI, and the Rust ABI is not frozen. However, both C++ and Rust support functions that use the C ABI. Therefore, interoperability between C++ and Rust involves writing things in such a way that C++ sees Rust code as C code and Rust sees C++ code as C code."

https://hsivonen.fi/modern-cpp-in-rust/


Context: this poster is likely Walter Bright, the creator and first implementer of the D language itself.

Cool to know you also read that article about C++ in Rust (I personally read a little bit, then left, because interfacing with C++ from Rust wasn't a problem I had), but I guess it's obvious that you would.


> is likely

Probably 100%


I've written wrappers for C++ libraries to work with D and it's _very_ easy. To work with C++ classes in D, I created factory methods that created/destroyed them with C ABI and used/disposed them respectively.

Working with C is even easier. There is no FFI, etc. Just use Dstep[1] and get going. Or use dpp[2].

[1] https://github.com/jacob-carlborg/dstep

[2] https://github.com/atilaneves/dpp
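
The binding side can be as thin as a block of declarations. The names below are hypothetical, just to show the shape: extern (C) declarations link straight against the C library, and extern (C++) handles C++ name mangling for simple cases:

```d
// Hypothetical C library surface, declared directly in D: no FFI
// layer, no wrapper code, just declarations matching the C headers.
extern (C)
{
    void* widget_create(int size);
    void  widget_destroy(void* w);
    int   widget_count(const void* w);
}

// D can also mangle and call into C++ directly for simple cases;
// this matches `int cpp_add(int, int)` on the C++ side.
extern (C++) int cpp_add(int a, int b);
```

These declarations compile on their own; the actual library only has to be present at link time.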


Up until quite recently, while the front end of D was written in D, the back end was in C++ (more precisely, C with classes, as it is fairly auld skool).

It's now finally pretty much all in D, and I've been piece by piece refactoring it into more idiomatic D.


Here's a comparison with Nim. In my experience Nim is one of the fastest both in terms of execution and compile time.

https://github.com/timotheecour/D_vs_nim


I see a lot of comparisons between unlike languages (Go and Rust for example) but it looks like D and Nim really are competing for the same niche.

And C# as well, via AOT compilation and the memory management improvements in 7.x versions.

It would be great if someone knowledgeable compared these contenders for about the same space "close to metal":

  * Rust;
  * D;
  * Ada/Spark;
  * Zig;
  
using C and C++ as reference points.

(I also wonder if ATS-lang is used by anyone in practice; it seems to be targeted approximately at the same space.)


Its reflection capabilities are as powerful as Java's, if not more so, and you can avoid doing so much at runtime by using pure functions and templates.
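
For instance (my sketch, not from the comment): __traits plus static foreach walks a type's members entirely at compile time:

```d
// Compile-time reflection: list a struct's members with __traits,
// unrolled by static foreach, at zero runtime cost.
string describe(T)()
{
    string s;
    static foreach (name; __traits(allMembers, T))
        s ~= name ~ " ";
    return s;
}

struct Point { int x; int y; }

void main()
{
    // Evaluated entirely at compile time if you ask for it:
    enum desc = describe!Point();
    static assert(desc == "x y ");
}
```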

For those commenting on the name: weka is not only the name of a storage company and some machine-learning software, but originally a flightless bird native to Aotearoa / New Zealand.

http://nzbirdsonline.org.nz/species/weka

Not to be confused with weta, which is a large scary looking (but actually pretty harmless) insect in the grasshopper family and also the name of another company that did most of the digital effects for the LotR movies.


Weta Digital (https://www.wetafx.co.nz/) are named after the grasshopper, not the bird.

> Our previous company, XIV (that IBM acquired in 2007) used a private version of C that had polymorphism, generic programming, and integrated RPC, for code that was a mixture of XML, C, and weird header files.

Super interesting. XIV was a great system, but "weird" is a good description for some of the source (IBM of course neglected/ruined it). I thought most of the XIV folk went to Infinidat.

In the storage world, C is still king, so nice to see a challenger. But some of the hurdles he describes in the article, ouch. No wonder people stick with un-sexy languages and toolchains.


I discovered D is a secret weapon at several companies, but not all of them want to make it widely known...

Several languages seem to share this curse. You do see companies advertise it at language conferences or on websites: Jane Street for OCaml, Remedy Entertainment for D, Naughty Dog and ITA for Common Lisp. But you are often left wondering if it is the tip of the iceberg, or an accurate reflection of industry use. In the case of CL, you see multiple commercial implementations still being supported, indicating much more use than one would suspect from grepping GitHub projects.

A pattern I seem to observe is that if the program/framework/platform grows large and successful enough, the company gets acquired for its tech and everything gets rewritten in some more 'industry standard' language.

edit: As pointed out, Jane Street advertises its OCaml use, not Haskell. Oops!


Just a nitpick, but it's OCaml for Jane Street rather than Haskell.

Haskell is certainly used as a bit of a secret weapon in some of the Quant groups at banks like Barclays Capital though!


>ITA for Common Lisp.

>A pattern I seem to observe is that if the program/framework/platform grows large and successful enough, the company gets acquired for its tech and everything gets rewritten in some more 'industry standard' language.

I wonder if that happened for ITA's software after ITA was acquired by Google (for some hundreds of millions, IIRC).

I read somewhere that Paul Graham's ViaWeb, which was written in mainly Lisp (or at least some key parts were), was later rewritten in some other language, maybe C++, like the pattern you observe.


Probably it's not that easy to replace Lisp for something else at Google, since it implements a relatively complex search engine core. Thus it's still used. I guess they WOULD be happy to replace it, since Lisp is otherwise not used much at Google, AFAIK. ITA/Google also developed another complex application using Lisp (a reservation system for airlines, initially for Air Canada), but that wasn't successful in the market.

At Viaweb Lisp was used for an early web store maker (enables people to have their own webshop) implementation.

> company gets acquired for its tech

Actually companies get bought for a business model, customers and some amount of tech. Reimplementing earlier approaches, even multiple times, even multiple times with different technologies, seems to be normal in the web business.


>Probably it's not that easy to replace Lisp for something else at Google, since it implements a relatively complex search engine core. Thus it's still used.

Thanks, makes sense.

>At Viaweb Lisp was used for an early web store maker (enables people to have their own webshop) implementation.

Yes, I had read PG's article about it ("Beating the Averages"). IIRC the templating language they used in ViaWeb, which allowed customers to customize their stores, was a killer feature, and relied heavily on some Lisp language feature, maybe macros.


I’m nitpicking a bit, but Jane Street focuses on OCaml.

why is it in a company's best interest to not evangelize the publicly-available tools it uses?

Microsoft/Amazon/Google are paying $$$ to increase availability of AI/machine learning education, because they want a knowledge pool to hire from. Wouldn't this be the same with languages?


Two have told me they did so because D enabled them to write faster code than their competitors, and they were in a line of business where faster code meant making more money.

One reason D code is faster is array slicing, where you can refer to a subsection of an array. This pays off for things like strings. Consider the string:

    s = "/root/me/file.ext";
I want to extract the "file" as a string. In C/C++, because strings are 0-terminated, I take a cache miss, allocate 5 bytes, copy 4 characters, and install a 0. In D,

     name = s[9 .. 13];
I take no cache miss: just add 9 to a pointer and set the length to 4.

(A slice is equivalent to a (length, pointer) pair.)
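
Spelled out as a runnable sketch:

```d
void main()
{
    string s = "/root/me/file.ext";

    // No allocation, no copy: `name` is just a (length, pointer)
    // pair referring into the memory of `s`.
    string name = s[9 .. 13];
    assert(name == "file");
    assert(name.length == 4);
    assert(name.ptr == s.ptr + 9);  // same memory, offset by 9
}
```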


>why is it in a company's best interest to not evangelize the publicly-available tools it uses?

Because they want people that are good with the tool, not people that crash-coursed/bootcamped their way into what they are told is a marketable skill.


>D is a secret weapon at several companies

That used to be said about Python back in the day, before it became as widely known and used as it is now.


The difference being with Python you gained in prototyping speed but were losing (at least at the time) on execution speed, whereas in D you get rapid prototyping while keeping very high efficiency.

Good point. But that is partly due to the design of Python, it being a highly dynamic language. I don't want to get into any flame wars, but I'd hazard a guess that Python is somewhat more dynamic in nature than D (I know some D, and Python well), and that at least some of the things one can do in Python would require a lot more knowledge, skill, and (speaking loosely) acrobatics or contortions to do in D. (It would be good to be proven wrong on this, though. :) This is not a criticism of D; I really like the language, and am aware of its power.

Check out these two posts about D by Dmitry Popov (lead developer and Director) on the Infognition blog:

http://www.infognition.com/blog/2014/why_d.html

http://www.infognition.com/blog/2014/d_as_scripting_language...

(2nd post not loading for me right now)

In fact, here are some posts I've written that show how to write some kinds of basic apps for various purposes in D, that I posted about on HN just recently:

https://news.ycombinator.com/item?id=18609543


Forgot to put the titles for the 2 Infognition posts linked above, here they are again, with titles now:

Why D?

http://www.infognition.com/blog/2014/why_d.html

D as a scripting language:

http://www.infognition.com/blog/2014/d_as_scripting_language...


Is this true? Mind sharing which types of projects it was mostly used for?


Remedy (game developer) uses it in-house, mostly for their animation and AI engines.

Congrats to Weka.IO. They managed to pull it off after the CEO visited DConf and asked for help.

D is a brilliant language. I got drawn into D by this article[1]. I've used D to replace some of the test tools where Java/Python were used. GC is a friend in such cases. I could've written them in C++ as well, but Phobos wins over STL hands down!

Initially the "solution" test application for one of our REST services was written in Java. It covered the functional, performance and longevity tests. The REST service itself was written in C++. Eventually customers reported that under heavy load the response time of the REST service was too long (> 400 ms), and we fixed the code. But during tests we were unable to really load the application to the fullest: 48 cores & 48 threads in Java and we still couldn't overload the REST service, so we had no way to prove that our fix actually worked. Ultimately we rewrote the test app in D and, boom, we were finally able to achieve the load that customers created in their production env - both in terms of response times and throughput. I could've written it in C++, but D's std.parallelism + template mixins are a boon, thanks to David Simcha. Based on our tests, the D test app was 20 times faster than the Java equivalent (written by one of our finest Java experts).

Sure, there is a lot of visual noise due to @nogc, etc. But beneath that there is a beautiful and readable language waiting to unfold, if only the D authors would stop the feature creep and revert to sane defaults.

On the other hand, I started to convert the REST service from C++ to D last year and it didn't go well. Blame me! There are _very_ rough edges in D as well that are the result of over-engineering. One such thing is const'ness in D: it makes it impossible for library authors to ever use const. I don't care to go and fight in the D forum; Manu has already screamed many a time. Lately I found that one of the reasons for the poor performance of my D port was auto-decoding - duh! It was not even the GC. (The default should be fast, shouldn't it?) I spent a considerable amount of time on this port and eventually it failed to meet the performance vs. maintainability trade-off. C++ is getting better year by year, or at least every 5 years :-). std::vector, std::map, etc. are safe for concurrent reads (at least on POSIX), and std::vector has erase with an iterator. Phobos's std.array doesn't have equivalent features, albeit it is much more readable than the STL.

FWIW, D doesn't deserve the negative publicity that it usually gets. It's a fantastic language. Try it and if you like it, enjoy using it for the right use cases. After all choose the right tool for the job at hand.

[1] https://wiki.dlang.org/Component_programming_with_ranges


The difference between const in D and const in C++ is that D's const is transitive, and C++'s is not.

Transitive const is necessary for things like pure functions. It's more difficult to use because the compiler really, really means it. But when it's there, you (the user) know that function is not modifying the const data structure, not no-how, not no-way.
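
A small example of the difference (my own, with a made-up Node type): in C++, a const Node* stops you from writing head->value, but head->next->value is fair game; in D the const travels through the pointer:

```d
struct Node
{
    int value;
    Node* next;
}

int sumValues(const(Node)* head)
{
    int total = 0;
    // Reading is fine; writing through `head`, at any depth, is a
    // compile error because const is transitive:
    //     head.next.value = 0;  // error in D, legal in C++
    for (auto p = head; p !is null; p = p.next)
        total += p.value;
    return total;
}

void main()
{
    auto tail = new Node(2, null);
    auto head = new Node(1, tail);
    assert(sumValues(head) == 3);
}
```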


Could you elaborate a little on why D libraries cannot use const?

I'm sorry, these are very specific projects. A lot of creative work on the server side.

No it's not true!

The D standard library is so well documented. It's criminally underrated.

Random Fact: I can't help but smile when I read that D's "STL" is called Phobos because of my Doom 2 nostalgia :)

Have a nice day, everyone o/


Well, Walter's company is Digital Mars so that's where he gets his naming scheme from. Nothing to do with Doom except they both are using the same Greco-Roman deity astronomical naming scheme.

Originally D was supposed to be named Mars. The stdlib is named Phobos and there is a project called Deimos which is about providing bindings to C libraries.

Apropos of D and Phobos, two posts from my blog:

Simple parallel processing in D with std.parallelism (it really did speed things up, ~3x (CPU, not I/O), by my unscientific and small test at least):

https://jugad2.blogspot.com/2016/12/simple-parallel-processi...

D language front-end merged with GCC 9; some sample D programs:

https://jugad2.blogspot.com/2018/10/d-language-front-end-mer...

Excerpt from the link above, that says what the post is about:

"Here are a handful of D programs that use various features of the language and some of its libraries, on my blog. Most of them are simple command-line utilities to do various things. Readers may find them of use to get a flavor of the language and to know some of the kinds of things it can be used for."


D's standard library is called Phobos because the language was originally named Mars.

I like how main() can still be found in mars.d[1].

1. https://github.com/dlang/dmd/blob/master/src/dmd/mars.d#L909


Perhaps an unfortunate name, though. For some reason my brain involuntarily auto-corrects "WekaIO" into "WeakIO". I think advertiser sometimes even play with this auto-correct feature of the brain to make suggestions without stating them.

Weka is also the name of a machine-learning product from New Zealand.

The IBM system had 24x HGST N200 SSDs: 830 000 random read, 200 000 random write IOPS each. That is 24 * 200 000 = 4.8 million random write IOPS.

The Matrix system had 64x 1.2TB Micron 9100 SSDs: 750 000 random read, 300 000 random write IOPS each. 64 * 300 000 = 19.2 million IOPS.

Ok so you have benchmarked hardware with 4.8 Million write ops vs 19.2 Million write ops.


I clicked on the story in the hope that there was a New Zealand connection, and was disappointed that it seemed to be an Israeli operation.

It seems this article is not accurate anymore? (the reference is from March). It's not far off though:

https://www.theregister.co.uk/2018/11/30/wekaio/

https://www.hpcwire.com/off-the-wire/wekaio-places-in-top-fi...


Ah, just looking at this. When I looked at D back around 2005 it was a closed shop run by one guy - I had the feeling it did not take off because it simply wasn't an open enough project.

It is a shame, as since then Go, Swift and Rust have all come along and it is a lot more competitive for mindshare in that space... not to forget the merits of modern C++ (it is a far better language post-2011).


"it was a closed shop run by one guy"

You do realize Walter Bright is a frequent HN resident :) ?


I don't criticise him; I just think the structure of the project meant it failed to really launch. There are people like me (20+ years of C++) who, when they think of D, think of it as a closed project. I spent quite a bit of time looking at it and thinking how great it was, but it seemed confusingly proprietary (which is not a great place for a programming language to be).

Sounds to me like they would be better off with Rust, as they are actively avoiding the GC. How do the two languages compare in this problem space?

You can avoid GC with a non-GC standard library that they actually wrote:

https://github.com/weka-io/mecca


Here's the thing though: They wrote a new compiler and a new standard library to make the language work for them. How exactly is that good publicity for D?

D itself looked and still looks very interesting to me, as it can be very performant, without having insanely ugly syntax such as c++ or rust (inb4 rUsT iS VeRy ReADaBlE).


They didn't write a new compiler. They did some work to make their codebase run on LDC, the LLVM-backed implementation of D.

Rewriting the standard library for performance is not unique to D. Game developers famously avoid the STL/libstdc++ (https://github.com/electronicarts/EASTL). The reasons behind that seem to be similar: optimizing memory allocation for their specific use case.


If D didn't have compelling advantages, they wouldn't have invested such effort.

Also, when one is really after every erg of performance, one often finds a need to work under the hood to improve it for one's special use cases.

For example, many, many, MANY C and C++ users replace malloc/free, and/or write their own custom allocator.


> D itself looked and still looks very interesting to me, as it can be […] without having insanely ugly syntax such as c++ or rust (inb4 rUsT iS VeRy ReADaBlE).

I did a quick dive into the aforementioned stdlib, only to stumble upon something I would rather prefer any Rust/C++ version over: https://github.com/weka-io/mecca/blob/f5dc6d9f71983ea7b852ad...

Some might want to exempt standard library from the "readable code" argument, but I'd argue that readability argument was lost the moment language designers allowed anyone to write things like `whatisthis!"whatever"({t.__ctor(args);})` (regardless of best practices that regular devs might unanimously follow). You can't just say that "A is bad 'cause it allows unreadable code" and follow that with "nobody's doing bad things in B therefore it's much better".


Metaprogramming generally involves complicated code like that. The choice is really simple: either the language allows it, and then some features can be implemented as a library (and we see the immediate benefit of that in this case, where the library can be rewritten to optimize it for some specific use-case), or else the language doesn't, and then those features have to be implemented as language constructs, or are omitted entirely.

That's just nitpicking. I bet you can find equally ugly examples in the other two languages. My point is that "standard" D code is infinitely more readable than either C++ or Rust.

> My point is that "standard" code is infinitely more readable than either c++ or rust.

Still I'm struggling to see any code that is significantly more readable because of D's design decisions, or details on what makes D more readable than $lang (especially compared to modern C++/Rust that you seem to despise so much).


1. Template syntax. For example, a struct template in D is:

    struct S(T) { ... }
A function template is:

    T func(T)(T t) { ... }
2. No forward reference declarations required. This cuts down on a mass of unnecessary boilerplate. Even better, it allows the coding style of ordering the functions from most important to least important, rather than the reverse that is typical of C/C++ source files.

3. Terser declarations:

    ulong x;
instead of:

    unsigned long long x;
4. No ugly #preprocessor code interspersed with your nicely formatted code.

5. Nested functions mean they can be nestled close to where they are used rather than much further away in the file.

6. None of that awful

    #ifndef __INCLUDED_FOO_H
    #define __INCLUDED_FOO_H
    ...
    #endif
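
Points 1 and 5 in one runnable sketch:

```d
// Point 1: template syntax without angle-bracket ceremony.
struct Stack(T)
{
    T[] items;
    void push(T t) { items ~= t; }
    T pop()
    {
        auto t = items[$ - 1];
        items = items[0 .. $ - 1];
        return t;
    }
}

T twice(T)(T t) { return t + t; }

void main()
{
    Stack!int s;
    s.push(twice(20));  // T inferred as int
    assert(s.pop() == 40);

    // Point 5: a nested function, declared right where it's used.
    int square(int x) { return x * x; }
    assert(square(6) == 36);
}
```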

I already find the function template syntax (1) a bit more confusing because of the consecutive parentheses (I could get used to it pretty easily, but it will always be a bit slower to read), and 2–6 do not make me want to convert to D from Rust.

It has some additional downsides. Because so much code is generated on the fly during compilation, and because of the frequent use of auto, IDEs have a hard time understanding the code well enough to provide proper autocompletion :(

Most high-performance shops (Google, FB, Netflix, Amazon, ...) tweak their OS, compiler, etc.

You avoid GC in D in the same way you avoid GC in C++. If you are using D to do rapid prototyping or scripting, at least you have a GC that allows you not to think about it.

If you are on a project where performance is a feature, you already think about your memory management, no matter the language. A standard library has to be generic and will likely not match your requirements. That said, the STL's allocation strategies are a good step in the right direction in this space.



