Apparently it's being used at Facebook. If they actively injected life into the D ecosystem to the same extent Google does with Go, the momentum might pick up substantially. Look at how late Go arrived--it was announced in 2009--and how popular it already is. Now, that doesn't mean Go or D can necessarily become the next big thing, i.e. the C++/Java killer. But even if they got as far as Python or Ruby, that would be a significant change to the programming landscape.
As a C++ competitor, it seems logical that D would require more time to mature than a scripting language would. Even now-successful languages like Python took a long time to gain traction and then popularity.
And Facebook is using it, which seems to imply some reasonable momentum.
Right! Momentum is the most technically relevant quality of merit. Let's have a wholesome and fulfilling discussion of that aspect in particular. If that fails, let's discuss semicolons and braces and make this HN story very meaningful and informative. After all, that is the kind of discussion I come here for. If an article is crappy, let's make the discussion around it crappier still. Shudder at the thought that a discussion could be stimulating even if based on a less-than-stellar article. Low and middle brow, here we come.
Ha! You are fun, searching through my comment history and downvoting old comments that are still downvotable. Whatever floats your boat.
Momentum means that things like tooling, libraries, documentation and frameworks are being built. Even if D is 'technically' better (whatever that means) than Go or Rust or whatever other language you want to compare it to, a language doesn't exist on its own, but inside a larger ecosystem. And the number of people using/contributing to/being employed to work on Go/Rust/Java/C#/Python/Ruby/etc. means that the D language's benefits over those other languages may not be enough to overcome the ecosystem advantages that come from "momentum."
But yes, instead of acknowledging that there's a valid point there, let's just accuse everyone who lives in the real world where you actually have to USE a language, not just admire its technical merits, of being low and middle brow.
I imagine a language needs a certain rate of adoption to maintain the number of programmers actively coding in it. I feel like the term "momentum" accurately encompasses this idea.
>Let's talk about Justin Bieber!
I don't understand why you invoked the name of a Canadian popstar. There is no chance of such a comment adding to the discussion.
There is a strong argument to be made that the software profession does have some resemblance to pop music culture. "[Pop culture] has nothing to do with cooperation, the past or the future — it's living in the present." Alan Kay made this rather salient observation a while back, and the more I think about it the more truth I find in it.
I'll bite: a relevant technical question would be: is D really that much better than C++11 that you should abandon C++ (with all of the consequences that would have on tooling/libraries/etc.)?
That is an excellent question. There is really a lot of cross-bleed between C++11 and D. If nothing else, D was good competition that woke the C++ giant up into doing something about its deficiencies and backporting as many D features as possible. I think this will continue to happen, and both languages will benefit technically from this dynamic.
As for whether one should abandon an existing C++ project and rewrite it in D: probably a bad idea; such a thing would require strong arguments. On the other hand, for a project that is beginning now, I would say it's evenly matched. The possibility of abandoning a lot of the C++ cruft and legacy is not something that should be ignored. Whether that compensates for the reduced maturity of tooling as compared to C++ has to be decided on a case-by-case basis, and this is going to be a perennial question that afflicts the adoption of any new language. It's hard to have authoritative answers here.

That said, I am looking forward to some improvements in D: better garbage collection, better separation of the functions in the standard library that use garbage collection, getting the memory requirements down when compiling D code that uses a lot of CTFE, and, well, if you could add sum types, pattern matching and efficient fibres/coroutines, that would be very nice. I think fibres would be a tough one if one has to maintain portability and compatibility with C libraries.
Having used both, I'd say yes. C++11 does some wonderful things, many, many improvements that were a long time coming, but D put those improvements into the core language and its standard library. This makes D much more readable and hackable than C++11. Not to mention the metaprogramming support is significantly easier to use (though not more powerful, as they're both technically Turing complete).
D's metaprogramming is significantly more powerful. For example, string literals can be template arguments and can be manipulated at compile time and turned back into D code.
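A minimal sketch of what that looks like in practice (the `makeGetter` helper and `Point` struct are my own invented illustration, not anything from the standard library):

```d
import std.stdio;

// An ordinary function; because its result feeds a mixin, the compiler
// evaluates it at compile time (CTFE) and pastes the returned string back
// in as D source code.
string makeGetter(string field)
{
    return "int get_" ~ field ~ "() { return " ~ field ~ "; }";
}

struct Point
{
    int x;
    mixin(makeGetter("x"));  // generates: int get_x() { return x; }
}

void main()
{
    writeln(Point(42).get_x());  // prints 42
}
```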
The phrase "C++ is an extremely fast language -- meaning software built with it runs at high speed" needs to be changed to "It's possible to build high speed software with C++".
I take any performance claims made by a particular language with a pretty big grain of salt (particularly those made vs Java/JVM) unless they're accompanied by some reasonably sophisticated benchmarks and source that actually show a performance difference -- not just one that low-level language enthusiasts presume exists.
And yes, languages are not slow or fast. The JVM could be as fast or even faster than native (AOT) because JIT can in theory optimize better than AOT. There are fixes planned for performance issues. Things have improved since the benchmark. It's only one particular benchmark. One specific implementation of VM/compilers/etc. Let me show you some badly written code in language X that runs slower than language Y on machine Z.
In my opinion JIT is always going to be slower than native (AOT). Maybe the gap will close but fundamentally you can do anything a JIT does AOT at zero run-time cost but not the other way around. That's not a language thing: C++ targeted at a JIT environment (and I think you can target .NET, for example) will also run slower, and languages that are JITted today could conceivably be AOT-compiled.
The other thing is that if a specific language doesn't allow fine control over memory layouts and data types at the native level, and/or if there are inherent costs in the language's design (e.g. duck typing), that will ultimately prevent it from running as fast as languages that do allow it. That's because you can always build higher abstractions from primitive ones, but you can rarely build a primitive one from a higher one. Very smart compilers may be able to identify a "primitive use case" and optimize for that, but in general that's a much harder problem.
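To make the layout point concrete, here's a tiny sketch in D (the `Particle` type is made up for illustration) of the kind of control being talked about: a plain struct with a known, fixed layout, no per-object header and no indirection.

```d
// Four floats laid out back to back: 16 bytes per particle, no hidden
// object header, no boxing, no pointer chasing.
struct Particle
{
    float x, y, z;
    float mass;
}

static assert(Particle.sizeof == 16);

void main()
{
    // One contiguous allocation for a million particles; iterating over it
    // is cache-friendly in a way an array of boxed references cannot be.
    auto particles = new Particle[](1_000_000);
    foreach (ref p; particles)
        p.mass = 1.0f;
}
```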
> fundamentally you can do anything a JIT does AOT
False. Amongst the things that JITs can do that AOTs cannot are optimisation based on profiling of the program executing with its actual inputs, interprocedural optimisation of dynamically linked code, interprocedural optimisation of code generated at runtime, and runtime specialisation [1].
I'm not sure there's a bright line between runtime profile-guided optimisation and runtime specialisation; however, I think most of the optimisation that production JITs do today is more reasonably classed as the former rather than the latter. That said, the other three are all happening right now in JVMs around the world.
I apologize for trying to be a little too clever here. There are two things that I'm trying to say:
1. If you want to, you can use the same techniques a JIT employs in your AOT program. You can basically embed a JIT in your program if you think that's beneficial for some specific scenario.
2. Whatever the JIT comes up with as an optimal solution for some specific run-time scenario, assuming it's really optimal, can be used in an AOT-compiled program. I think the reality today is that JITs very rarely come up with an optimal solution anyway, but even if they did, you could still embed that solution in an AOT-compiled program, so they have no fundamental advantage.
There's no getting away from the fact that as you add more and more layers to the onion you are losing fine control and giving up performance. Assembly beats C (disallowing inline assembly), which beats JIT (disallowing embedded native code). It could be argued that a human writing assembly can't match what tools can generate, but that's not really true (given enough time), and even if it is true there's nothing stopping the human from using tools where appropriate.
> Maybe the gap will close but fundamentally you can do anything a JIT does AOT at zero run-time cost but not the other way around.
You can't inline virtual functions, which is the number 1 optimization done by the JVM (as it enables most other advanced optimizations). You can try achieving it with a profile-guided optimizing compiler, but a JIT can adapt its optimizations based on changes in the inputs (which trigger different code paths at different times in the program's lifetime). Also, the JVM inlines virtual calls even for code loaded at runtime.
> The other thing is that if a specific language doesn't allow fine control over memory layouts and data types at the native level...
True. Java 8 already takes steps in that direction with the @Contended annotation (that adds padding to prevent false-sharing), and Java 9 will let you have much more precise control over layout with value types.
On modern multi-processors, performance is also determined by your ability to use scalable (usually lock-free) concurrent data structures, and those are a lot easier to implement and use when you have a good GC.
You mentioned that in the other discussion. Can you tell us a little bit more about inlining of virtual functions? It doesn't sound like something hugely valuable, because if you're performance driven you're already not using virtual functions for places where that overhead is unacceptable (and generic programming is what lets you do that).
I've done lots of multi-threaded programming in C++, some of it with lock-free structures. C++11 brings things like async, futures, promises etc. to the table in the standard library, and there are also other concurrency libraries and tools (e.g. OpenMP). Can you expand on how GC would make my life easier in general? Surely it depends on the nature of the task you're trying to solve; locks aren't always the bottleneck and there aren't always concurrent solutions.
> if you're performance driven you're already not using virtual functions for places where that overhead is unacceptable
Well, that can only be true for some very localized computations. Virtual method calls are the basis for polymorphism (virtual method calls can even be used when implementing functional languages' pattern matching -- the functional form of polymorphism), and polymorphism is the basis for most programming abstractions. If your code is interesting enough, it will have lots of virtual calls. If it doesn't, then you're probably doing something very specific, and C/C++ might be a better option indeed.
> Can you expand on how GC would make my life easier in general?
Sure (I guess you mean re concurrent data structures; otherwise it simply saves you the pain of manual memory management). The basic principle of most (all?) lock-free data structures is that multiple threads might be reading a single node of the DS (which may contain stale data) and then try to CAS a new node into the DS. Without GC, it's very hard to determine when all threads have stopped examining a given node so that it can be safely deallocated.
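To illustrate the pattern, here's a rough Treiber-stack sketch in D (since that's the language of the thread). Everything here is my own invention for illustration: names are made up, memory-ordering details are glossed over, and it assumes a reasonably recent druntime whose core.atomic accepts non-shared arguments.

```d
import core.atomic : atomicLoad, cas;

struct Node
{
    int value;
    Node* next;
}

__gshared Node* head;  // shared between threads "by hand", for brevity

void push(int value)
{
    auto n = new Node(value);
    Node* old;
    do
    {
        old = atomicLoad(head);
        n.next = old;
    } while (!cas(&head, old, n));  // retry if another thread won the race
}

bool pop(out int value)
{
    Node* old;
    do
    {
        old = atomicLoad(head);
        if (old is null)
            return false;
        value = old.value;  // may be re-read; another thread may race us
    } while (!cas(&head, old, old.next));
    // `old` is now unlinked. Without a GC we could not free it here, because
    // another thread may still be dereferencing it inside its own CAS loop;
    // with a GC it is simply collected once nobody can reach it.
    return true;
}
```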
See the following discussions on the difficulty of implementing lock-free data structures in C++ (because of the lack of GC):
I don't find myself using a lot of virtual functions in C++, mostly because between preferring composition over inheritance and using generic programming, you can avoid a lot of the typical use cases for polymorphism. So we don't need everything to derive from Object in order to have vector<Object>. There are definitely some areas where polymorphism is the most elegant solution/abstraction, and the only place you have to be a little bit careful is around your performance bottlenecks. What I mean is that if you're looping through a bunch of objects and calling o->render(), the actual virtual call will usually be minor compared to the work of rendering said object. Also, all these objects have a different implementation of render, right? So I'm not sure what there is to inline (but I will read up on your references). If you're AOT-optimizing a scenario like this, you'd probably want to separate out those objects and process them in groups based on their type. It'll make your code suckier and less maintainable (you give up your nice abstraction), but faster. This is something a compiler can't do for you up front, because you would essentially be changing your design to get speed.
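A tiny sketch of that trade-off, in D for consistency with the rest of the thread (the types are invented): the first loop pays a virtual dispatch per object, the second gives up the polymorphic container but lets the compiler see the concrete type.

```d
import std.stdio;

abstract class Drawable
{
    abstract void render();
}

final class Circle : Drawable
{
    override void render() { writeln("circle"); }
}

final class Sprite : Drawable
{
    override void render() { writeln("sprite"); }
}

void main()
{
    // Polymorphic version: one virtual call per object, nice abstraction.
    Drawable[] scene = [cast(Drawable) new Circle(), new Sprite(), new Circle()];
    foreach (d; scene)
        d.render();

    // Grouped version: homogeneous arrays of final types, so the calls can
    // be devirtualized and inlined -- faster, but the design is now shaped
    // by performance rather than by the abstraction.
    Circle[] circles = [new Circle(), new Circle()];
    Sprite[] sprites = [new Sprite()];
    foreach (c; circles) c.render();
    foreach (s; sprites) s.render();
}
```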
So honestly the performance of virtual functions in C++ isn't something I've ever seen as a performance issue (which probably means my code isn't very interesting ;-)
I will second your observation. I have often observed (purely anecdotally) that runtime polymorphism gets used gratuitously even when compile-time polymorphism would have sufficed. In my C++ code I seem to be reaching for CRTP fairly often, and it does its job; in that case there would be little benefit to be had by inlining virtuals. In fact, modern compilers do take a stab at devirtualization, but of course they cannot be as informed as a runtime system. Without adequate support for detecting exhaustiveness, compile-time polymorphism does have the drawback that one could forget to account for all the different cases. This is an instance where OCaml does better. Sure, exhaustiveness checking can fail, but it requires far less discipline to code in a way that it doesn't. Sure, in future I might need to add more cases, but the beauty is that the type system will tell me all the places where I need to handle the newly inserted case. This takes care of the majority of the use cases of virtual functions. Exhaustiveness-checked pattern matching is a killer feature!
Side note: compile-time polymorphism ought to be easier to do in D than in C++.
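For example, a minimal sketch (the shape types are invented): a D template that only requires its element type to have an area() method, with everything resolved at compile time and no base class or virtual call involved.

```d
import std.stdio;

struct Square { double side; double area() { return side * side; } }
struct Circle { double r;    double area() { return 3.14159 * r * r; } }

// Compile-time polymorphism: S is deduced per call site, s.area() is a
// direct (inlinable) call, and a type without area() fails to compile.
double totalArea(S)(S[] shapes)
{
    double sum = 0;
    foreach (ref s; shapes)
        sum += s.area();
    return sum;
}

void main()
{
    writeln(totalArea([Square(2.0), Square(3.0)]));  // 13
    writeln(totalArea([Circle(1.0)]));               // ~3.14159
}
```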
I don't have a problem with JIT. In fact, the later the system emits actual code, the more information it has at its disposal, which it can presumably use to emit better code.
What I disagree with, however, is the widespread defense of the practice of emitting crappy byte code with the justification that we will fix the mess at runtime when and if poor codegen becomes a problem. This self-inflicted pessimization of moving the starting block backwards is what bugs me. For a scripting language I can buy this; a script could potentially start faster. But Java (the current mainstream poster boy of a JITed language) is no scripting language.
When I am compiling the code with optimization I have more time at my disposal, so make use of it. I will even help you (the compiler) with profiles collected from previous runs if I think they would be helpful. If that means spending effort optimizing parts of the code that never get run in future, that is fine; I can tolerate that because I have time now. If some of that effort is wasted, fine, I can take that hit; the sky won't fall. On the other hand, when my system is running I am indeed short of time and have very little budget for optimization and analysis, so I cannot run very deep optimizations. Not all tasks can afford JIT burn-in.
For me, a superior way would be to optimize hard AOT and clear all the low-hanging fruit, then have JIT hooks that can change code to adapt to runtime characteristics and mop up whatever further benefits remain.
> On the other hand, when my system is running I am indeed short of time and have very little budget for optimization and analysis, so I cannot run very deep optimizations. Not all tasks can afford JIT burn-in.
Well, server-side Java apps are meant to run for hours, days or months. They certainly have the time for a few seconds of compilation (the JIT is profile-guided, so it knows where to spend effort optimizing, and the whole process takes no more than a few seconds, amortized over the application's first couple of minutes). For long-running server-side apps, the JIT is just as negligible as AOT compilation (even if you count the time when the application is running uncompiled/unoptimized, which, for Java, may be 2-120 seconds or so). AOT only improves startup time. But AOT does have an impact on short-lived apps (for which Java currently isn't the best choice) and on mobile apps (where you'd rather not spend battery on compilation), which is why Oracle is now testing a cached JIT for Java 9.
> For me, a superior way would be to optimize hard AOT and clear all the low-hanging fruit, then have JIT hooks that can change code to adapt to runtime characteristics and mop up whatever further benefits remain.
A cached JIT would achieve pretty much the same goal.
Is there any language/environment that tries to combine the two? I.e. JIT for a few iterations and then "settle" on the best optimization and cache it?
The Azul JVM has been doing this for some time. The Sun/Oracle/OpenJDK JVM started doing it in JDK 7. I think IBM's J9 JVM has been doing it for some time too. I'm sure Microsoft's CLR VM does it, they're a smart bunch.
I think Java is exploring that for version 9, but this can only improve startup times. When you consider the entire lifetime of the program, JIT compilation is negligible, so there's no point in caching the results (other than to improve startup time). Also, the JVM never (AFAIK) "settles" on specific machine instructions. It will always be on the lookout for optimizations that are better adapted to the current execution profile.
Well, a JIT-compiled program can be substantially faster than an AOT-compiled one, given that the running time is long enough, which it is for services that run 24/7 (that's more about throughput, but you get my drift).
The proof is not in the pudding in this case. You say "In my opinion JIT is always going to be slower than native (AOT)" and I say that you're wrong. The optimizations that can be done ahead of time are a subset of the optimizations that can be done at runtime. That it's hard to write a really good JIT compiler has nothing to do with it. Neither does the size of the problem. "Always" is a dangerous word. But I can hear myself being pedantic; for small problems where start-up time is important (most problems), AOT will be better.
The proof is in the pudding as to the state of the art today. What might be possible is a different question. I thought you said that JIT is faster than AOT today and that's something you have to prove (and I don't think you'll be able to, simply because it's not).
> The optimizations that can be done ahead of time are a subset of the optimizations that can be done at runtime.
I don't think so. It's the other way around. That's because ahead of time you can do anything that takes any amount of time, including running the program. As long as your actual problem is predictable, you can always do a better job AOT. In fact, the process that a lot of performance-driven AOT development goes through is running the program on sample data, analysing the results (including looking at the generated code), and figuring out ways to make it run faster. If the fastest solution for a specific problem involves generating native code, then your AOT program can generate native code (if you want to say that's cheating, then go ahead :-) ). The mere ability to produce your native code up front is a performance advantage, and there is no disadvantage. In theory, amortizing compilation time over a long run makes that delta as small as you want, but it's still a delta, and we haven't approached the theoretical bound in real-world programs.
To be absolutely honest, when taken to the extreme there is no AOT or JIT. The thing is that people usually refer to JIT as spending some limited effort during run-time to generate some native code for some specific code sections. At the extreme JIT and AOT merge.
Well, I think you could be a bit more specific than that and reasonably say something like "If speed is a high priority for you, then C++ is often a good choice." That's why PC games are generally written in C++.
I hear that argument a lot, and yet I have never seen real-life software that runs comparably in Java and C++. For example, a few weeks ago, I was looking at some (fairly simple) data processing code backed by SQLite. The Java version was too slow for the amount of data that needed to get analyzed and reanalyzed repeatedly. A rewrite in C++, with no special trickery, ran 3x faster.
After isolating everything in Java (including ditching the JDBC driver for lower-level SQLite bindings), nothing changed. It was likely JNI overhead, but frankly, that doesn't matter: Java was too slow. No sophisticated benchmarks here — just real-life production code that runs like molasses in the JVM and runs acceptably in C++.
Also, C++11 is not a low-level language anymore. (And I say this as a long-time Lisp and Clojure programmer.)
I think the point is that claiming speed is a property of languages is simply a category error, like saying programming languages are green or noisy. These can be properties of programs (well, properties of a given execution of a program) but have nothing to do with languages.
For languages with exactly one usable implementation, the distinction is rather academic. In the case of C++, as far as I know, every production-quality compiler produces code which runs circles around code produced by any Java compiler.
In addition, some languages have features which cause them to be more difficult to optimize. (See, e.g., the restrict keyword in C99.)
It's also important to remember that Java is still in this position even after almost 20 years of development. And this is development undertaken by large companies that were or are quite well funded, and have had very talented individuals work on this technology.
If they've yet to get consistent C++-level performance out of Java or the JVM, it's most likely factors inherent to those technologies that are to blame.
I think most programmers I know have heard of D. When I took a look at it years ago, I concluded that the main problem was that it tried to please everyone. There are simply 'too many' features without a core theme to drive programming.
In a fun and meta sort of way, you are saying D has not tried to please you. There are structural problems with this kind of criticism, btw. I have upvoted your comment; I do share the vision of a small, elegant, yet sufficiently powerful language. At the same time, if some parts of D were to be thrown out I would find a way to complain about it. In fact, one of my gripes is that it does not have compile-time sum types and pattern matching; that, and lightweight, efficient fibres (I believe these abstractions still use the OS's threading mechanism underneath; I would be very pleased to be proven wrong).
The thing is, D initially wanted to be C++ done right and then C++ done better, so with that in mind it has to have a lot of features. I do appreciate that D feels a lot more homogeneous and designed, rather than built up by unrelenting accretion like C++. An aspect that I quite like is that if it is not immediately clear what the right way to resolve a design problem is (for example, a name clash between function overloading and specialization), the D tendency is to make it an error until the right solution/semantics emerges. In C++ the choice is sometimes somewhat arbitrary: here we have k choices, let's pick one and go with it even if it has nothing obviously convincing going for it.
I would like to draw your eye towards templates and template metaprogramming. It really feels bolted onto C++, and it's really tortuous to use. In D, however, although it was added much later, it gels very nicely with the rest of the language. Template metaprogramming is nothing special in D: it's just a function like any other, except that it and its arguments are evaluated at compile time. (For D developers reading this: please, can we have reference counting for CTFE?)
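A minimal illustration of that last point (the function is arbitrary, just for show): the same ordinary function runs at compile time when its result is needed in a compile-time context, and at run time otherwise.

```d
// An ordinary function, nothing template-specific about it.
ulong fib(ulong n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Used in a compile-time context, so the compiler evaluates it via CTFE.
enum fib20 = fib(20);
static assert(fib20 == 6765);

void main()
{
    import std.stdio : writeln;
    writeln(fib(20));  // the very same function, evaluated at run time
}
```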
So yes, you are correct: it has a lot of features. Given D's design scope, that is hard to avoid, but these features fit together well enough for a practical language to be used in anger, and they fit a lot better than in its immediate rivals, C++ and Java.
The problem with low-level languages is that they don't live in isolation.
As someone who worked on a large project in Ada: at some point you'll need to use the OS and its libraries, which for Unix are in C. Want to use the network or get an OS semaphore? C. Many languages support calling into C library functionality, but it always seems to force a lot of things back into the C way of doing things.
Maybe with a runtime (Java!, or Ada tasks) this is abstracted away, but it's still there.
Many of the libraries you'd want to use are already ported to D or binary compatible with it (you just need a header file). Given that a fibre-based web framework has been written in it, and that it can use pointers natively, there's not a whole lot you can't do with it. Even writing bare-metal code should in theory be doable with some work.
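For what it's worth, the "just need a header file" part looks roughly like this in D (a minimal sketch using C's printf; a real project would translate or generate declarations for the whole header):

```d
// Declare the C function's signature and link against libc as usual;
// no binding generator or wrapper layer is required for simple cases.
extern (C) int printf(const(char)* format, ...);

void main()
{
    printf("Hello from D, via C's printf: %d\n", 42);
}
```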
Programming languages are so previous-century. The future is environments that move away from the machine and deep into the user's mind. Automation taken to extremes.
More importantly, does it contain an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp? If not then I really don't see it having a future.
True--in principle any language can be executed in the JS runtime. A lot of them can even be executed reasonably quickly. Though I suppose the parent comment was asking whether a mature compiler from D to JS currently exists.