Specification for the D Programming Language (dlang.org)
175 points by crazypython on Jan 8, 2020 | hide | past | favorite | 141 comments


I started reading about D a couple of months ago after work. For me, it appeared to be easier than Scala, for example, which I use at work. Not only is it simpler, it is faster, and I am not even talking about compile times. These features alone persuaded me to rewrite a couple of Scala-based data processing algorithms we use for our ML models to give it a try. What I really liked is that you don't have to know everything about the language to write performant code. I still keep wondering how it gets constantly overlooked.


> I still keep wondering how it gets constantly overlooked.

Because you keep thinking in technological terms. Languages have come and gone; those that stayed are either old enough to have a midlife crisis or are supported by at least one big company. For all the merits D has, there is no big name willing to push it forward. Even though its creator is now working for Facebook and doing some internal work there, Facebook didn't express any interest in showing this off.

It really is a shame, because D has a lot of things that might interest programmers, all it takes is a little more presence.



And Andrei is a major contributor, but not the creator.


Plus he's no longer involved with D.


Not true -- he has just stepped back from a leadership role afaict.


He is. He just stepped down from the leadership due to family reasons. Atila Neves has taken it over now.


This seems to be a growing niche where D is gaining popularity. I see more and more people turning to D where they would normally use Python or R for ad hoc data crunching.


We are using D for high-performance computational biology -- I brought a Go dev and a Python dev onboard, both with a minimal learning curve. Even the powerful template metaprogramming seemed really easy for the team to pick up. I believe this is because it really has a nice design (compared with, say, C++, which I would never, ever use in general bioinformatics unless the team had specific past expertise).


I also use D for data & ML at work (Netflix) when the standard Python workflow doesn't cut it anymore. There's only so far you can go with the typical "call a popular Python lib interfacing with a C++ backend" paradigm before hitting a brick wall.


How far is that? What about Cython?


As one of those people, I can say the reason is for the C-level processing power with the approachable syntactic sugar. Out of all the languages gaining popularity for their ability to run natively (Rust, Go, D, etc), I've felt that D is the most natural continuation of C/C++, at least syntactically. I haven't touched the language in about a year, but I'm personally rooting for it.


And with D as Better C, (-betterC switch), it's easy to convert your C project to D, one file at a time, and not require the D runtime library (only the standard C library is required).

The original impetus for this was so I could incrementally convert the D compiler backend itself from "C with Classes" to D.
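For readers who haven't tried it, a minimal sketch of a -betterC file (hypothetical example; the point is that only the C standard library gets linked, with no D runtime):

```d
// hello.d -- build with: dmd -betterC hello.d
// No D runtime is linked in; only the C standard library is required,
// so this object file can sit alongside not-yet-converted C files.
import core.stdc.stdio : printf;

extern (C) int main()
{
    printf("hello from D, minus the runtime\n");
    return 0;
}
```

Because the symbols are extern (C), the rest of a C project can keep calling into the converted file as if nothing changed.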


I try to use D for that (difficult because the libraries are fairly low-level -- I would say sparse, but calling into C and C++ is pretty trivial) because, for the same effort I put into bullying Python to do what I want, I can write type-safe (generic) code that is both actually readable and ready to be reused if need be. And sometimes orders of magnitude faster.

That and ranges (D/Andrei's preferred model for iteration) are excellent - in my view at least.
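For readers unfamiliar with ranges, a small illustrative sketch of the style they enable -- lazy, composable pipelines with no intermediate arrays:

```d
import std.algorithm : filter, map, sum;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // A lazy pipeline: nothing is computed (or allocated)
    // until the result is actually consumed.
    auto evenSquares = iota(1, 10)        // 1, 2, ..., 9
        .filter!(n => n % 2 == 0)         // 2, 4, 6, 8
        .map!(n => n * n);                // 4, 16, 36, 64

    writeln(evenSquares);       // [4, 16, 36, 64]
    writeln(evenSquares.sum);   // 120
}
```

Any user-defined type that exposes the same front/popFront/empty interface plugs into the same algorithms.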


eBay's tsv-utils author here. A fair bit of performance benchmarking was done on the tools with the goal of exploring this type of use (data crunching). D did really well (tsv-utils are fast!). There's more info on the benchmarks page in the github repo. Perhaps the best summary is the slides from a talk I did at 2018 DConf. Links:

- tsv-utils repo: https://github.com/eBay/tsv-utils

- Performance studies: https://github.com/eBay/tsv-utils/blob/master/docs/Performan...

- Talk slides: https://github.com/eBay/tsv-utils/blob/master/docs/dconf2018...


It doesn't even need to be an alternative to Python or R, since it's really easy to interoperate with both languages. This is in addition to tools like eBay's tsv-utils.

http://code.dlang.org/packages/pyd

https://embedr.netlify.com/

http://code.dlang.org/packages/autowrap

https://github.com/eBay/tsv-utils


The big issue with D, in terms of zero overhead, is that it has garbage collection by default, and if I understand correctly, the garbage collector is not as advanced as Java's or Go's.

While you can use D without the garbage collector, you will lose access to a lot of the standard library, which limits the practicality of running without garbage collection.


> is not as advanced as Java's or Go's

An advanced GC requires having "write gates" inserted into the generated code. These take away from performance, but in a language like Java that makes very heavy use of the GC, the gain in GC performance outweighs the slower code.

D is not a GC heavy language, even if all you use is the GC, hence it is a poor tradeoff. (For example, in D you can use the stack for a great deal of the routine allocations, whereas in Java these are often done with the GC. Java has optimizations to try and detect when allocations can be done on the stack, but this isn't as effective as proactively putting them on the stack.)
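The stack-vs-GC point can be sketched in D terms (illustrative names; structs are value types that live on the stack by default, while classes are reference types allocated through the GC):

```d
struct Vec2 { double x, y; }   // value type: lives on the stack by default
class  Node { double x, y; }   // reference type: `new` allocates via the GC

void example()
{
    auto v = Vec2(1, 2);   // stack allocation, no GC involvement at all
    auto n = new Node;     // GC heap allocation

    scope s = new Node;    // `scope` puts even a class instance on the stack,
                           // destroyed deterministically at end of scope
}
```

This is the "proactive" choice the comment describes: the programmer picks the stack up front instead of relying on escape analysis to discover it.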


> An advanced GC requires having "write gates" inserted into the generated code

This isn't strictly true. In particular, it is possible to design a GC such that write barriers are implemented in hardware (these days using the MMU, not dedicated hardware). It's also possible to write a (non-incremental, non-concurrent, non-parallel) GC that uses no write barriers: just stop the world, collect, and move.

The MMU-for-write-barrier technique I've seen is to keep pointer colour in the high bits and map the same physical memory onto n copies of itself in virtual memory (so you don't need to mask the colour bits away before dereferencing). When the mutator tries to write to a black pointer (or maybe pointers are coloured by whether their targets might have moved?), the GC gets a memory exception and can ensure that the invariants are maintained. The general assumption is that this doesn't happen very often in most programs. I can't remember the name of the system I'm thinking of but it's one of the new GCs being developed for Java.

An alternative method to avoid a write barrier is avoiding mutation in your programming language. This makes GC simpler. There are other advantages and disadvantages.


I implemented a write gate using the MMU once. It was quite a bit slower, and I abandoned it.


You can avoid using the GC by using malloc/free. You can also use -betterC and D will only need to be linked with the C standard library.

> you will lose access to a lot of the standard library

This is incorrect. Not much of it needs the GC anymore.
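As an illustration of the manual route (a sketch, not the only idiom; the names are made up), @nogc is statically checked, so the compiler proves the function never touches the GC:

```d
import core.stdc.stdlib : malloc, free;

// @nogc is verified at compile time: if anything in here allocated
// with the GC, the code would fail to compile.
@nogc nothrow
double* makeBuffer(size_t n)
{
    return cast(double*) malloc(n * double.sizeof);
}

@nogc nothrow
void useBuffer()
{
    auto p = makeBuffer(1024);
    scope (exit) free(p);   // scope guard pairs nicely with manual memory
    // ... work with p[0 .. 1024] ...
}
```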


It's my estimation that this misperception of D (that it requires GC) will take a while to unlearn/relearn.


Not only of D: plenty of GC-enabled languages (including .NET-based ones) provide such capabilities, but for some reason many wrongly learn that having a GC means 100% heap usage with GC-only allocations.


I think most of it comes down to not knowing how a GC works internally.


I think it is a mixture.

It's your remark, plus being taught programming in languages that do use GC heap allocations for everything. In languages that allow fine-grained control over resources, that control is usually left as an "exercise for the reader", so many don't care and aren't aware of what they are losing.


Since D has scope guards - any plans to add deterministic destructors? i.e. RAII a la C++.


Struct destructors are always deterministic: https://dlang.org/spec/struct.html#struct-destructor

Class destructors can be declared deterministic with the "scope" keyword: https://dlang.org/spec/attribute.html#scope

Scope guards are useful if you cannot change the data type itself.
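Putting those pieces together, a sketch of C++-style RAII with a struct destructor (hypothetical wrapper type, not a Phobos API):

```d
import core.stdc.stdio : FILE, fclose, fopen;

// The struct destructor runs deterministically when the
// variable leaves its scope -- same guarantee as C++ RAII.
struct UniqueFile
{
    FILE* fp;
    this(const(char)* path) { fp = fopen(path, "r"); }
    ~this() { if (fp) fclose(fp); }
    @disable this(this);   // no copies, so the file is closed exactly once
}

void read()
{
    auto f = UniqueFile("data.txt");
    // ... use f.fp ...
}   // f.~this() runs here, closing the file
```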


They've been there... I think since the beginning. If not since the beginning, then for at least ten years now.


Already there.

P.S. click the link to see the spec...


While the opinion-pendulum is currently on the contra-GC side, I still believe GC is the correct default. Mostly because GC is easier than manual memory management.

If you have real-time needs, then GC is probably out. That includes handling audio and video playback.

Tight memory constraints are another killer argument against GC but that only applies to small embedded devices. Smartphones, desktops, and servers have enough RAM.

The great thing about D is that you can start the easy way with a GC. When real-time or memory become constraints at some point you can selectively adapt parts of your program.

You could say, D wants to provide Python and Rust and everything in between so that you can shift seamlessly to wherever your specific sweet spot is. Different parts of your program can even use different spots. In a game engine, you could have the audio and rendering parts as non-GC real-time code, while the game logic is easy high-level script-like.

We are very used to a scripting language plus compiled language combo these days but is the separation a good one?


> If you have real-time needs, then GC is probably out. That includes handling audio and video playback.

You'd be surprised. At first our D audio products had the GC enabled, and that caused very few problems. When we got rid of the D GC for other reasons (macOS + druntime portability in a shared library), we found that we only gained in memory usage. The GC was already "tamed" to happen outside real-time threads. This is described in: https://www.auburnsounds.com/blog/2016-11-10_Running-D-witho...

The conclusion:

> Long story short, we brute-forced our way into having fully @nogc programs, with the runtime left uninitialized.

> As expected this fixed macOS Sierra compatibility. As a bonus Panagement and Graillon are now using 2x less memory, which is nice, but hardly life-changing.

> We found no magical speed enhancement. Speed-wise nothing changed. Not registering threads in callbacks did not bring any meaningful gain. GC pauses were already never happening so disabling the GC did not help.

> In conclusion, it is still our opinion that outside niche requirements, there isn't enough reasons to depart from the D runtime and its GC.

And indeed now we dearly miss the GC...

For sure you wouldn't allocate in tight loops. But the portion of a program that can accommodate a GC is very much larger than is commonly told on the internet.

As for video, it is way less sensitive to scheduler pauses than audio. Video systems are full of allocations (often of committed virtual memory) because throughput is often more important than latency there.


Don't rule real-time out. Real-time GCs exist. People just need to build open ones for these languages. Example write-up on this:

http://michaelrbernste.in/2013/06/03/real-time-garbage-colle...


In my experience (.NET, including mobile and embedded) when handled with care, GC is often fine even for audio and video.

The trick is not loading the GC too much, neither in terms of allocations/second nor bytes/second (for multimedia it means recycling buffers as opposed to creating/destroying), also having enough RAM so you don’t need to use 80% of installed memory.
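The buffer-recycling trick can be sketched (here in Java rather than .NET; the pool type and sizes are made up for illustration): borrow a buffer, hand it back, and the steady-state allocation rate drops toward zero.

```java
import java.util.ArrayDeque;

// A trivial buffer pool: callers borrow a buffer and hand it back,
// so the steady-state allocation rate (and hence GC load) drops toward zero.
public class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int size;

    public BufferPool(int size) { this.size = size; }

    public synchronized byte[] acquire() {
        byte[] b = free.poll();
        return (b != null) ? b : new byte[size]; // allocate only when the pool is cold
    }

    public synchronized void release(byte[] b) {
        free.push(b); // recycle instead of letting the GC reclaim it
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(4096);
        byte[] buf = pool.acquire();
        // ... fill buf with decoded audio/video samples ...
        pool.release(buf);
        // The next acquire() reuses the same array: no new allocation.
        System.out.println(pool.acquire() == buf); // prints "true"
    }
}
```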


Just yesterday I posted a link about real time audio processing done in Java.

Anti-GC hate seems to always come from religious objection, or from having had bad luck and not understanding how to actually use the tools at one's disposal.


Most of the anti-GC hate seems to come from C++ and Rust programmers. Also, game developers seem to hate GC with a passion; then they go back to writing their games in JavaScript and C#.


It goes back a long time.

One thing that being an Oberon user taught me was that having a GC wasn't a show-stopper for systems programming.

Ironically, C++ has had GC support for more than 20 years now.

Managed C++ and C++/CLI on .NET, C++/CX on UWP, Unreal C++, and the GC API introduced in C++11.

So it seems to be one of those things that you only learn to enjoy after experiencing it personally.


You also have a lot of control over when the GC runs. You can manually disable it and run collection cycles, and it only runs when you allocate. You can also mark a section as @nogc to ensure there's no GC usage.
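A sketch of that control surface (illustrative only):

```d
import core.memory : GC;

void main()
{
    GC.disable();      // collections are paused; allocations still succeed
    // ... latency-sensitive section ...
    GC.enable();
    GC.collect();      // trigger a collection at a moment of our choosing
}

@nogc void audioCallback()
{
    // Anything that would allocate with the GC here is a compile error.
}
```

Because the collector only runs on allocation, a section that doesn't allocate will never be paused, even without any of the calls above.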


> is not as advanced as Java's or Go's.

golang's GC isn't advanced either.


Depends what you mean by advanced. In many ways it's closer to achieving its stated goals than Java's.


Java's GCs already achieve their goals. The JVM has several GCs to select from based on needs such as high throughput or very large heap sizes (up to the TB range), and there are two in the experimental phase for very low latencies (ZGC and Shenandoah).


> very large heap sizes

It is not heap size that matters, but how often old objects are mutated. Basically, if you have 1 TB of immutable objects, you are lucky, but if you constantly modify the whole heap, then you will have a lot of headaches.

GC authors have continuously misled on this issue for many years (they made the same promise about multi-gigabyte heaps with G1 several years ago).


> Javas GCs already achieve their goals.

So does Go. Otherwise it wouldn't be anywhere near as popular, especially with regard to server-side usage by giants of the industry.


Google's stamp helps.

Java GCs don't need allocation tricks involving several GBs, like the one Twitch was forced to use.


Twitch did that since they preferred to use code over environment configuration. They could have achieved similar results using the GOGC environment variable but didn't want to; they even said so in their post.


I'm not sure what qualifies as an advanced GC, but Go is the leading GC along with Java and C#, so ...

https://blog.golang.org/ismmkeynote


If you are interested on that topic these two posts are a must read:

- https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...

- https://blog.plan99.net/modern-garbage-collection-part-2-1c8...

The main takeaway is that saying Go has the leading GC is just a (somewhat dishonest) PR move; Go's team made a pretty good GC given their project constraints, even if it is not state of the art at the moment.


This is a must-read only for GC theory enthusiasts. Java's GC is advanced and better performing, but that is offset by Java programs' heavy garbage generation. So the overall gain from an advanced GC is little, if any, in Java.


Do we know that Java programs generate more garbage than Go programs? This is often stated, but I've never seen any actual measurements of it. Remember that Java HotSpot has escape analysis just as Go does.

I would not be surprised if the difference in amount of "garbage"—as measured in either bytes or object count/mark time—were a lot smaller than people think.


Java is getting value types. It will be interesting to see how performance improves once it does.


I never said that it was state of the art or "the" leading; my point is that Go's GC is currently a good all-around GC. And I've read that article -- I think the author is biased against Go, and currently there is no article even proving that the two new Java GCs are faster than the Go one (or even faster than G1).

You know, when you end your article with "Overall, it looks to me like the Java guys are winning the low latency game." when none of those GCs are actually production-ready... Go's GC low latency has been stable and in use for some time now.

Out of the box, Go's GC is very good and doesn't need all the memory or the tuning that G1 does. (I spent years tuning GCs for Java applications; it's a nightmare.)


This doesn't actually refute anything in Mike Hearn's posts.

And isn't Shenandoah included by default in OpenJDK 12+?


In non-Oracle builds, yes, along with ZGC.

Oracle chooses to build with only ZGC enabled.


The page you link to does not compare Go's GC to Java's or C#'s.


I don't know if it's a consequence of their GC, but Go is significantly slower than Java or C#.


You're mistaken; Go is on par with both languages, and usually faster.


My understanding is that it has lower latency but also lower throughput.

Although most of the benchmarks I've seen are at least 2 years old. It would be nice to see some updated benchmarks that also compare against the Shenandoah and ZGC garbage collectors for the JVM.


Java is getting two new low latency collectors. golang is stuck with a GC that favors latency over throughput, making it not suited for high throughput processing.


By stuck, you mean the Go devs made the sensible decision to have only one zero-configuration, ultra-low-latency garbage collector that suits Go's common use cases, instead of trying to be a jack of all trades with many GCs, each with its plethora of knobs to adjust.


This is not a sensible decision, it is an idealistic one, and Go will backtrack on it if it lives long enough. People have different use cases, and they will be more than willing to turn a few knobs if it saves them millions of dollars.


It means that golang is relegated to simple server side tools, and not suited for a broader range of programs like the JVM is.


Yep. Go has a focus.

That's how you get from zero to heavy usage in companies like Uber, Twitter, Cloudflare, BBC, Basecamp, Canonical etc... in less than a decade.


The same kind of companies that were all hyped up on Ruby on Rails, and then moved to Java when performance mattered.

Hype driven development is a thing.


Given a large enough sample size, there will always be companies which ditched technology X for Y.

Unless technology X is irrelevant.


You'd be surprised at the extent hype influences decision making when it comes to choosing which language to use (even in companies you mentioned), at a significant cost down the line. I'll just say I know people at one of the companies you mentioned, and the amount of friction and cost due to them using golang is quite remarkable.


No offense but I'd rather trust the engineers of all these companies combined over an anecdote on the internet.

Not to mention that technology change always generates friction. You just can't please everyone.


Assuming those engineers were making sound decisions and not pimping up the CVs at employers' expense.


Exactly what happened at the company I'm talking about.


For real world, non-trivial programs, golang lags behind Java/JVM.


Well, on the Benchmarks Game (the internet reference for microbenchmarks), you're mostly right: Go is faster than Java but slower than C#.

But real-world application performance needs fine-tuned libraries that scale, and this is where Java destroys both Go and C#.


Take the Benchmarks Game site with a big grain of salt. They're running on an old processor, and that biases results a lot. For example, someone submitted a patch that significantly sped up a Java benchmark by using `Math.fma()`. However, because the processor the benchmarks site runs on is old and does not support the underlying instruction that `Math.fma()` uses, it ended up way slower.

I ran several benchmarks locally on a CPU that's a few years old where the benchmarks site shows golang being faster than Java, without changing the code, and I got the opposite result (Java was faster). Especially the longer the program ran. For example, the site shows that golang is faster in the spectral norm benchmark. Running it locally with a larger input parameter:

golang:

    time ./test 30000
    1.274224153
           18.81 real        70.19 user         0.38 sys

Java:

    time java spectralnorm1 30000
    1.274224153
           16.18 real       105.95 user         0.45 sys

Or with the nbody benchmark:

golang:

    time ./nbody 60000000
    -0.169075164
    -0.169012474
            6.09 real         6.04 user         0.02 sys

Java:

    time java nbody 60000000
    -0.169075164
    -0.169012474
            6.04 real         5.94 user         0.06 sys


FastHTTP destroys Netty; isn't Netty a fine-tuned library?


Apples to oranges comparison. Netty is a general purpose event driven networking library. FastHTTP is specialized to HTTP. Even given that, Vertx (which uses Netty) ranks above FastHTTP in the Techempower benchmarks, even though they have to be taken with a grain of salt.


Netty is not tuned. It handles large streams of small messages very inefficiently: too many synchronizations per message, no automatic batching to avoid expensive write syscalls. At least it was so two years ago.


I'm not sure - I don't use either go or Java much these days - but some cursory googling implies that light-4j/undertow core is to fasthttp as netty is to net/http?

https://github.com/networknt/light-4j/


FastHTTP is a low allocation HTTP library in golang. Netty is a low level event driven framework for writing networking applications and protocols. It's used in many libraries and frameworks which build on top of it like Vertx, Spring webflux, and more recently Undertow.

Things should get more interesting once Java gets green threads, which would make it easier to implement high performance concurrent servers without needing to drop down to event driven code unless for specific needs. That, and with value types, should push the performance bar forward.


I'd say the opposite. I wouldn't say it's an issue, but personally I wish D didn't offer alternative memory management options. Other languages like Go or C# don't even give you a choice, so people just accept it and move on. But every discussion on D always turns into the "evil GC" discussion. I use a lot of D and was never bothered by the GC and I find it easier to work with than manual memory management. I guess I could live with automatic refcounting too but I doubt it could be integrated well into the language at this point.


You can manually allocate and free, and you can use `scope` classes to avoid the GC. Closures only use the GC when you don't explicitly delete stuff.


I wish I had known about D 10 years ago. It's everything I ever wanted out of an LLVM language, but I feel like it's dying and the package ecosystem is tiny, which is crazy with how old it is. What they need is some new web design and docs and exciting stuff to make it feel like a new language so people recognize how truly awesome it is :/


> I feel like it's dying

I can't control how someone else feels, but I can tell you your feeling is very wrong. Just a few of the things people are being paid to work on right now:

Android support

Webassembly

Symmetry Autumn of Code projects

There will also be paid work on iOS once they find someone that can do the job (if they haven't already). And yet another annual DConf will be taking place in London in June.

These are just a few of the examples where folks are putting real money into the D ecosystem. The language continues to evolve (for example, moving to safe by default). This is by far the most active D has been in the seven years I've been using it.


> There will also be paid work on iOS once they find someone that can do the job

AFAIK Adam is going to work on it once he finishes up with the Android stuff.


I found D 10 years ago, and it had the same stale feeling it seems to radiate today, and this is turning people away.

The truth is it was good enough for most uses already, but people waited for someone to step up and make things better before they felt it would be worth trying D.

D really shines if you do greenfield projects where you have the luxury to reinvent some wheels. It's surprisingly pleasant to see a lot of the boilerplate one might expect to accumulate just not be there, because it's either generated or just not necessary, the language being flexible enough.


Plus affine types have gone mainstream. Good stuff, but I'd like memory safety!


An ownership/borrowing system for memory safety is in the works for D. There's a prototype of it as a pull request:

https://github.com/dlang/dmd/pull/10586


What type of apps should one use D for? I think this is one of the main issues with languages that did not gain traction: they are too generic and try to be too many things to too many people.

If I exclude the financial backing, I think languages that tried to tackle one specific thing did rather well. Rust came to be to address the issue of security, and there is not really an alternative. Yes, people use it for CLI tools or backend services, but that's just because they are riding the hype train.

The same with Go.

On the other hand we have things like Dart, which did not really try to solve an issue people felt they had. The same with Crystal. Nobody really needed a faster Ruby that much, and those who did needed a production-ready solution, which was just too hard to deliver.

A language like Nim reminds me of D. Its author is also a brilliant CS PhD, but I feel like he does not really have a problem to solve; it's all just an intellectual challenge. Companies like Status.im (a backer of Nim) have a problem that they are trying to solve by using Nim. Making the right and perfect solutions is hard, and this evolutionary approach is really harming the expansion of the ecosystem. Nim has just too many features and too many approaches to memory management (the recent gc:arc release), and it is all too half-baked and does not support the features Nim advertises.

This is basically the same issue I have with D. I have no idea what I would use D for. And could I rely on D in the domain I pick it for? E.g. the same field where Go excels.


Applicability is not the main issue. D can be used for anything where speed matters: gamedev, data processing, machine learning, kernel programming. It is not a niche language. But as it always is with low-profile languages that grow out of sheer hobbyist talent and are not backed or picked up by big companies, it remains in its own humble world for years. On top of that, D was unfortunate to have a pretty rough early period, with the transition from D1 to D2, two standard libs, several compilers, etc. None of this would have been an issue had there been hired teams working on it and a big enough community; instead, it all worked against D. However, lurking around the D forums and available libs left me with a good impression. Yeah, some tooling is missing and might be rough around the edges, but there is plenty of work if you want to get your hands dirty. In addition, D has a very helpful community of very skilled engineers imho (something I sometimes miss in other languages...).


> Rust came to be to address the issue with security and there is not really an alternative.

That will change in the coming months.

> I have no idea what I would use D for. And could i rely on D in the domain i pick it for ?

D is a general programming language. You can use it for scripts, number crunching, UI, and everything in between. It's been around a long time, and is part of the gcc collection.


Walter, I love your language conceptually as it runs fast and has a simple command line compiler (no .net framework hell or Java .jar madness).

However, most of my experience is with Python, Perl, Bash, etc., so I don't have a lot of Java, C#, or C++ experience. With D, this sometimes bites me, as all the docs seem to assume I'm familiar with that world. There is that one beginner's book, which is cool, but it only covers the absolute basics and I still get confused traversing the stdlib.

Still, Thank you for all the work you've put in over the years!


I found D quite easy to pick up coming from a Perl and JS background. One can get up to speed quite fast, and it has a plethora of language features that will keep you engaged for years to come, as opposed to Go, which keeps the language as simple as possible.


I think the gcc backend is quite attractive! rustc still has some cases where it doesn't catch the optimizations gcc does.


> Rust came to be to address the issue with security and there is not really an alternative.

Ada has been an alternative since 1983.

Oberon was an alternative in 1992.

Modula-3 was an alternative in 1988.

And while they don't fix use-after-free, Object Pascal dialects and Modula-2 have been safer options than C for ages.

What Rust has going for it is being at the forefront of bringing affine types into mainstream languages, and being more appealing to younger generations than a language like Ada.

This doesn't mean Rust is here to take it all.

In fact, I don't have any specific use case where Rust would excel; AOT-compiled languages with value types and automatic memory management fulfill my use cases much better.


There are a lot of places D does well, but if I had to pick one for which it is ideally suited, I'd say replacing C on a Linux/Mac/Windows desktop for app development or scientific computing. C is simply obsolete in those domains since D offers basically a superset of the functionality. In terms of interoperability, you can even #include C header files.


Unfortunately, with scientific computing the only place I see for D is writing custom performance-critical algorithms, but then again, for university folk it is more straightforward to do that in C++ or C, since there are plenty of code snippets lying around.

There is the excellent Mir library, but its documentation is subpar, with no tutorials or examples either. There is Netflix's Vectorflow, a small deep learning library, but it is CPU-only and supports only feed-forward networks, so it works for one specific narrow case. There is the fastest-on-earth TSV/CSV parsing toolkit, tsv-utils, from one of the eBay engineers, but I only learnt about it when I started looking through the D website's resources; also no tutorials. There are tools, but using them needs more time investment than the alternatives.


I actually have quite a lot of tools available that I've built up over the years. I wish I had more time to work on it. I have a matrix algebra library that in my opinion is extremely convenient to use that I'm preparing to release complete with documentation in the next couple months.

I have everything available in R at my disposal, because it's easy to embed an R interpreter inside a D program. Note that this does not always give poor performance, because the R code calls into compiled code or you can just call the compiled code directly using the R interface.

For numerical optimization, I call the R optimization routines directly (i.e., the C library functions).

For basic model estimation, I call the Gretl library.

For statistical functions (evaluating distributions and such) I can call into the R API or Gretl.

For random number generation, I can call into Gretl, R or GSL. I have parallel random number generation code that I ported from Java.

For machine learning (I do a limited amount like lasso) I call R functions. The overhead with that is so low that there's no point in not just calling the R functions directly.

So things are there. It's just a matter of finding the time to turn it into something others can use. Right now I'm focused on doing that with my linear algebra library.


Oh hello friend, you've piqued my interest as someone interested in D, numerical optimization, and linear algebra. I have some questions though.

How do you do numerical optimization in D? Do you somehow wrap Coin-OR's CBC C++ library or lp_solve? What does it mean to call the R functions directly? Do you have an example? I'm going to guess that won't be able to handle the massive and time critical models I use, but am still curious.

How do you do linear algebra? Are you binding to BLAS, LAPACK, Armadillo? Or did you write some routines from scratch?


A couple of points to make before I give my answer. You want to check out the Mir project linked in the other comment. It's a well-designed library (though maybe lacking documentation). The other thing is that you can #include C headers directly in a D file using https://github.com/atilaneves/dpp so you can always costlessly add a C library to your project.

For optimization, I was referring to calling into the R API, which exposes the optimization routines in base R (Nelder-Mead, BFGS, Conjugate Gradient, and simple bounds-constrained BFGS). In terms of what it can handle, I guess that's entirely up to what R can handle. Here's the project page, but it looks like I haven't committed to that repo in three years: https://bitbucket.org/bachmeil/dmdoptim/src/master/

If you do try it and have problems with anything, please create an issue so I can fix it or add proper documentation.

I've also used this binding of the nlopt library: http://code.dlang.org/packages/libnlopt

For linear algebra, I built a wrapper on top of the Gretl library http://gretl.sourceforge.net/

There were two reasons for that. First, it offered a really simple LAPACK interface when I was starting out with D, and second, it offers a lot more than just linear algebra.

Is this something I'd recommend to others? I don't know. I built my infrastructure over a period of several years while waiting for my son at his many practices and activities. I also optimize for programmer convenience rather than performance at all costs. The time it takes to write correct, performant code is far more valuable than having code that runs 15% faster.

This has a lot of potential, but ultimately I'm paid to do other things, meaning those things become the priority...


Thanks for the replies. I'll take a look at your stuff.


Not the guy you're asking, but you can check out the D-based Mir GLAS library as an alternative to the usual suspects; it performed best of them all in benchmarks [1]

[1] http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...


This is great, please do! It would be nice to share it beyond the dlang forum too. How does it compare to scid, btw?

I like R but generally use it for basic stat tasks and plotting instead of Python. It would be awesome if you could share your experience on how to set it up with D, in a blog post or whatever form you find useful.

D lacks this kind of tutorial material so much.


I've been writing up a summary of how I use D to put on my website. Maybe this is the push I need to finish it.

About scid[0], I looked at it when I started, but it seemed to be largely inactive by that time, it didn't do what I needed, and the documentation wasn't really good enough. I was also turned off by the excessively generic nature of everything - there were just too many templates. At least that's what I recall.

[0] http://code.dlang.org/packages/scid


> Rust came to be to address the issue with security and there is not really an alternative.

What does Rust have to do with security? It's not even compliant with any modern security standards (at least not that I'm aware of).

> I have no idea what I would use D for.

This makes no sense. D is a general-purpose language, just like Rust, Nim and Go (and it does concurrency better than Go, IMO). It just happens to be the best general-purpose language that I've ever used. The language gets out of your way and lets you focus on solving problems in the domain you're working in.


> Rust came to be to address the issue with security and there is not really an alternative. Yes people use it for cli tools or backend services but that’s just because they are riding the hype train.

I like Rust for CLI tools because it takes some nice ML features and it handles strings sensibly.


Strings in Rust are the first time I ever got confused about what a String was. It might be sensible, but it's not easy.


Last time I tried to install a Rust-based cli network tool it took literally 4 min to compile it! Seriously, 4 minutes for something that monitors your network load. If this is what it takes to make a cli tool, I'd rather do it in plain BASH :)


You could do the same with OCaml.


D can be used for almost everything: this is indeed a marketing message that is harder to sell than "Language X is made for problem Y".

For example I do signal processing, and D is surprisingly pleasant for it since it has builtin complex numbers.
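For the curious, complex arithmetic in D is available out of the box via std.complex; a minimal sketch (the values are arbitrary, not from any real DSP code):

```d
import std.complex : complex, fromPolar, abs;
import std.math : PI;
import std.stdio : writeln;

void main() {
    // rotate a signal sample by 90 degrees: multiply by a unit phasor
    auto rotator = fromPolar(1.0, PI / 2);
    auto sample  = complex(3.0, -4.0);
    writeln(sample * rotator);  // complex multiplication with natural syntax
    writeln(abs(sample));       // magnitude: 5
}
```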

Being well-balanced, generic, readable, productive, and performant seemingly doesn't convince more than "this solves problem Y". Probably why marketers use segmentation.

But at the end of the day, more power is more power, and every specialized language tends to become general-purpose.


You're being too reductionist.

In the specific case of Rust, no, security is not its primary strength. In fact, it has no standard and a few soundness issues, and I'm not sure what certifications it has, so this makes little sense.

What Rust does have is very impressive. It has a type system that can express a very large subset of data race-free and memory safe programs. It's one of the few languages to get strings right. It has a fantastic package manager. It has destructors, so no more "defer f.Close()". It has sum types, so your state machines actually look like state machines, and "if err != nil { return err; }" is just "?".

Most of these features aren't unique to Rust at all, but their combination definitely is, so I hope this unintentional advertisement explains why "cli tools or backend services" developers like Rust.


that is really exciting!


Indubitably!


Thanks!


D makes most memory safety bugs disappear through its runtime bounds checking and garbage collection, both proven solutions in a variety of languages.
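To illustrate the runtime side of that claim, here's a minimal sketch of D's default bounds checking (catching a RangeError is done here for demonstration only; Errors normally terminate the program):

```d
void main() {
    import core.exception : RangeError;
    auto a = new int[3];   // GC-allocated; no manual free, no use-after-free
    try {
        a[5] = 1;          // out-of-bounds write
        assert(false);     // never reached
    } catch (RangeError e) {
        // the runtime check turns a would-be buffer overflow
        // into a clean, diagnosable error instead of silent corruption
    }
}
```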

Yes, Walter is working on more compile time stuff too, but the existing runtime checks really do excellent work and shouldn't be overlooked.


the package ecosystem was like 10 times bigger in 2019 than it was in 2015 when I had my first look at D


It's cool to see D on hacker news but isn't more interesting content related to D that could have been linked instead? The table of contents to the spec is pretty bland.


My only experience with D is getting the free OneDrive client to run on Linux[1]. What's more frustrating than the small ecosystem is the fact that the reference compiler, DMD, is limited to x86/x86_64, and you need a working D compiler to compile the primary alternative, LDC2, which can't bootstrap itself.

In order to get something like that to compile for aarch64, you have to build a recent GCC with the very old dmd frontend enabled, and then you can compile LDC2. This all to use one app!

[1] https://github.com/abraunegg/onedrive/


Why would you have to bootstrap the compiler to simply install an app?


Because nobody had a binary of the app for my architecture?


Does D have something like Rust's #[derive(Debug)] or #[derive(Hash)]?


The standard library doesn't provide similar functionality, but it's trivial to implement a mixin template that generates a print or hash function for any type.

That being said, the standard formatting functions can automatically print most types, including user-defined types. This eliminates the need for a derived formatting implementation in the majority of situations.
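A hedged sketch of both points (the DebugPrint mixin is a hypothetical name, not a stdlib facility):

```d
import std.stdio : writeln;

// A mixin template that generates a debug-print method for any struct
// by walking its fields via compile-time introspection.
mixin template DebugPrint() {
    void debugPrint() {
        import std.stdio : writefln;
        foreach (i, field; this.tupleof)  // unrolled at compile time
            writefln("%s = %s", __traits(identifier, this.tupleof[i]), field);
    }
}

struct Point {
    int x;
    int y;
    mixin DebugPrint;
}

void main() {
    Point(1, 2).debugPrint();  // generated: prints each field and its value
    writeln(Point(1, 2));      // stdlib formatter handles structs directly
}
```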


Not really required because serialization is trivial in D.

D's compilation isn't "stateful" like that: user-defined attributes cannot influence the thing they are attached to, and they do nothing unless something else reads them.


You can help D's ecosystem by contributing a CTFE engine that frees memory as it executes, and by implementing struct postblits.


> and by implementing struct postblits

Eh?


There's no weekend warriors on the D. You're either on it... or you haven't tried it.


Is there a reason for posting this link? Like the specs has just been published or something?

I wish people gave some context of why they post links without context and why I should be interested in the link.


Has anyone had luck building a D program with static cURL? The D distributions I've seen only supply a DLL, and I have messed with some compiler flags but it always wants to search for a DLL.


(2001)?


If it's an article that hasn't been updated since 2001, sure. But this has been updated fairly recently, this year.


What are the biggest recent changes?



DIP1021 is one step toward a larger goal outlined in the blog post 'Ownership and Borrowing in D'. What are the next steps?

https://github.com/dlang/DIPs/blob/master/DIPs/accepted/DIP1...


A prototype for the O/B system:

https://github.com/dlang/dmd/pull/10586


This is really fascinating, are there some benefits you foresee of this addition to D? I know the O/B system has been popularized by Rust, but have not really thought about it outside of Rust, is this an efforts towards -betterC or will it be used towards those efforts?


An O/B system is a way to:

1. eliminate double frees

2. eliminate use-after-free

3. eliminate memory leaks

4. eliminate use of undefined pointers

when using explicit memory management like malloc/free. Any D application, including DasBetterC, that uses explicit memory management can benefit from it.
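A hedged sketch of the kind of bug the checker targets; @live is the attribute used by the experimental dmd prototype, and the flagged lines are illustrative:

```d
import core.stdc.stdlib : malloc, free;

// With the experimental O/B checker, p is tracked as an owning pointer.
@live void explicitMemory() {
    int* p = cast(int*) malloc(int.sizeof);
    *p = 42;
    free(p);
    // free(p);  // double free: rejected once p's ownership has ended
    // *p = 1;   // use-after-free: likewise rejected at compile time
}

void main() { explicitMemory(); }
```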


D is by far my favourite niche language, due to its clean syntax and ease of use. I am surprised it’s not gaining traction.


D is not different enough from C++.

It's like C# to Java, which has traction only through relentless flogging by Microsoft. There's nobody to flog D.


It used to be that way, but these last 5 years, C# and .NET have grown a lot. If given the choice, I would choose C# every time; such a pleasant language to program in. All desktops supported, native binary compilation, easy (easier?) interop with native code. Lots of features to promote performance, like passing structs by reference, explicit stack allocation, etc. Well, I'm a game developer, so for me it's perfect. That said, I remember reading D's specification and really liking it. As it never gained much traction with gamedev, I looked away. Who knows, maybe in the future.


D today is C++2040.


Nah, because C++40 will just accumulate more and more unpredictable cruft while D is clean.


D is not clean. It already has some cruft. For example, there should be no "@safe" annotation but an "@unsafe" one. Unfortunately, D picked the wrong default back then.

Ok, "in relation to C++" it is clean. :)

Edit: There is a proposal to fix it: https://github.com/dlang/DIPs/blob/master/DIPs/DIP1028.md
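For reference, @safe is opt-in today; a minimal illustration of what changes inside it (function names are made up):

```d
// Inside @safe, the compiler rejects operations it cannot prove memory safe.
@safe int first(int[] a) {
    return a[0];       // fine: slice access is bounds-checked
}

@system int risky(int* p) {
    return *(p + 1);   // pointer arithmetic: only allowed outside @safe
}

void main() {
    assert(first([7, 8]) == 7);
}
```

DIP1028 would flip the default so that unannotated code is treated as @safe.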


Pardon the pun, but @safe is a safe choice. IMHO, for a general language it is more productive to have the low-hanging fruit as the default. Popularity follows consumer -> enterprise -> military -> aerospace, and safety gets more critical up the systems food chain. That's probably the main reason Ada is not a common or popular programming language: it is for the higher end of the food chain.


This makes more sense as an argument for dynamic typing or GC by default than a “safe” keyword, imo.

From my experience with Rust, there’s basically _never_ any cost in sticking to safe code. The standard library and crates pretty much always give you enough tools.

Unsafe is really only ever needed if you’re implementing the core of an abstraction, which is probably not something that’s going to deter “casual” users imo.

The borrow checker and type system definitely can be thorny at times, though. Other times they're very easy to work with! But having a more ergonomic way to fall back to Java-like behavior, if only for debug builds, would make languages like Rust _much_ more accessible.


D's nascent ownership/borrowing checker will operate at the function level. So you can incrementally use it as it suits your application.


Rust appears to have a strong push on Hacker News, and perhaps D should push harder here. I remember the first time I used this language, on a Friday night on my Raspberry Pi, and it just worked. The learning curve was pretty low for doing basic stuff such as controlling GPIOs. Wondering if D should perhaps target web devs more, as an easy way to extend, say, Node.js or write wasm apps. It really looks promising.



