Some here are asking about D. I saw this article recently, by a tech company founder (Infognition) using D heavily for his image- and video-related products. It was interesting:
I love how everyone is digging into the D/Rust/C++ flamewar and not discussing the project at hand.
It appears monolithic. There's actually very little content about what its design goals are, what it's trying to accomplish, or even what interfaces it'll support. I'm assuming POSIX, because everyone loves to rewrite UNIX, there's a boatload of documentation and implementations, and most of those are monolithic.
Actually I'm not targeting a POSIX structure at the moment; I'm leaning more towards implementing an OO API.
Maybe I will target POSIX in the future, or make a wrapper around my APIs for it.
I'd assume an Object-Oriented language would be better at implementing a microkernel. Objects, in my mind, really represent the idea of a service well so I'd be excited to see what it would look like.
Too bad we're never going to get our 128-core SPARC machines with an open-source microkernel, like was prophesied in the Linux-MINIX debate.
Have you checked out the GRUB features for loading blob filesystems into memory for you? It's one of the best ways to implement that kind of feature (for this sort of hardware).
Do you sit on some IRC somewhere? I'd be interested in talking to you about how you're approaching this work. Very cool
I think it'd be really interesting if someone tried rewriting the Linux kernel in D (or Rust for that matter) and then compared the performance, lines-of-code, etc.
Maybe it could even be done with just one small but critical portion and linked, so it isn't such a monumental task.
I would rather see someone take D, Rust, Swift, whateverX and implement something modern, following the concepts explored at Xerox PARC, ETHZ and MSR in terms of OS architectures and development environments.
I'm pretty sure software that requires a POSIX OS is sufficiently catered to. You have your Linuxes in every flavor anyone could ever want, ranging from LFS/Gentoo Stage 3/Arch for the tuners and minimalists, to the 'I want Docker/git at my OS level' NixOS-esque machines, to the Ubuntus for the "anything-but-Microsoft" demographic. (Not to mention your BSDs, your QNXes (plus 3 or 4 other POSIX RTOSes that are well supported in their ecosystems), your AIXes for the DB2s, ...) That space has been sufficiently explored and the need has been catered to.
Pjmlp (correct me if I'm wrong) would rather see exploratory endeavors than yet-another-POSIX-impl. (And for the sake of argument, should that new platform gather steam and you absolutely, 100%, completely and totally need POSIX support, you can get pretty far supporting PE/ELF binaries on a non-POSIX OS via an emulation layer with ~3-4% overhead. VMware had a few whitepapers on it, and Unity is a production component that demonstrates such behavior.)
And if he were motivated enough to write a POSIX impl from scratch, while not trivial, it shouldn't be too hard (see: BusyBox, Yocto/Poky, hell, remember the NetBSD days? Any idiot could port to platform foo in a weekend.)
Yep, that was my whole point. I'm curious about how much of an improvement (if any) there would be from switching an existing OS (namely Linux) to a "more modern" language like D or Rust, which are purported to be better for writing OS kernels. You're not going to find that out definitively by writing a research OS in them; you need to write an actual OS kernel that has a counterpart already written in C, so that you can compare directly.
Writing a research OS in D/Rust seems to me like someone claiming (before Tesla/LEAF were released) "electric motors are better for cars! Electric cars are superior to gas-powered cars!", but then instead of making an electric car that's actually comparable to a gas-powered car, they build an electric tricycle (velomobile) out of mostly bicycle components that weighs 40 pounds and seats one person and has a custom carbon-fiber fairing and a max speed of 30mph. Yeah, it's nice that you can build that, but it doesn't prove the assertion at all, because the vehicles are totally dissimilar and incomparable.
That sounds somewhat defeatist. Taken to its logical extreme, wouldn't that mean we'll never have anything better than POSIX?
Or are you suggesting more of an 'embrace and extend' thing, where a new OS supports POSIX for backwards compatibility but adds whatever new API(s) and then tries to grow adoption of those slowly?
No, taken to its logical extreme, that means we'll never have anything better than Win32.
Basically, if you want to make an OS that's immediately useful, it has to run software that's available right now. That means either Windows (Win32 or .NET) or POSIX or maybe whatever Mac uses. That's it.
Now of course, you can try to build an OS that has backwards compatibility with one standard and adds a new one. Mac OS X did that to an extent, to ease the transition from the classic Mac OS. Windows does that too, supporting multiple APIs (even POSIX at one point).
However, even here, these standards still require an OS to work a certain way underneath, which all current OSes do to a good extent. If you want to do something really, really different, it might not be so easy to build an emulation or compatibility layer. That's just the price of progress; if you want to do a clean-sheet design, you're going to sacrifice compatibility with everything that came before. If you build in backwards compatibility, that's going to limit how different you can be.
I think this is a valid question. They're pretty different languages in terms of design, but they're definitely worth comparing—their problem domains do overlap in spite of design differences.
I've used both, but can't claim to be much of an expert in either.
Similarities:
• Compile to fast, easy-to-deploy native code
• Abstraction continuum ranging from a very low-level, C-like style to a higher-level, more functional style
• Very easy to call C libraries from
• Type-inferred and generally less syntactically noisy than C++
Neutral differences:
• D is garbage collected; Rust has a novel memory management system based on linear types. This means D can never compete with Rust in performance and real-time or soft-real-time systems. On the other hand, it means with Rust you will spend more time thinking about memory ownership and other important-but-eliminated-by-GC concerns. Both are easier to deal with than C or C++.
• Syntax: Rust's syntax is not-quite-C-like; D's is very C-like. It comes down to personal preference. I prefer Rust's.
D advantages:
• Metaprogramming: D's metaprogramming is more or less unmatched, though I haven't experimented with Rust compiler plugins at all. You can probably do some funky stuff with Rust, but it doesn't feel as integrated into the language as with D.
• Compiles super fast.
Rust advantages:
• Type system. Rust's type system is ML-esque and generally much more powerful. I suppose this might not be an advantage to everyone, but personally I love how ML-y Rust feels.
• Concurrency.
• Ecosystem?
I think ultimately my choice is that if I'm going to use a compiled GC language, I may as well just use OCaml. D's memory management, or lack thereof, kills it for me personally, but it's a well designed language otherwise IMO.
Re: concurrency, D's concurrency story is really quite good. It may not be built upon as sound a logical model as Rust's (few languages can compete there), but it has very good ergonomics, and offers a nice range of concurrency strategies. See std.concurrency, std.parallelism, and vibe.d:
std.parallelism and vibe.d in particular are absolute pleasures to work with. As a framework, vibe.d is well designed to avoid unnecessary allocation and runs very efficiently.
D's current GC is quite poor, that's a common complaint. But the recent and forthcoming "nogc" changes are making a significant difference -- it's increasingly easy to avoid the GC altogether.
> it means with Rust you will spend more time thinking about memory ownership and other important-but-eliminated-by-GC concerns. Both are easier to deal with than C or C++.
I think an important wrinkle here is that Rust's ownership model helps prevent many runtime errors beyond memory unsafety. I'm not sure how D handles problems like iterator invalidation, but I've found it really nice to always know what's able to mutate the state in a given variable.
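To illustrate the iterator-invalidation point, here is a minimal, hypothetical sketch (not from the thread): iterating borrows the collection immutably, so the compiler rejects any mutation for the duration of the loop.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // An immutable borrow of `v` lives for the duration of the loop...
    for x in &v {
        // ...so a mutating call like `v.push(*x)` here would be rejected
        // at compile time: cannot borrow `v` as mutable while it is
        // already borrowed as immutable.
        println!("{}", x);
    }

    // Once the borrow ends, mutation is allowed again.
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```

The same pattern in C++ compiles fine and invalidates the iterator at runtime; in Rust the mistake never makes it past the compiler.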
D offers type modifiers that include the notion of transitive immutability. This is very useful in function signatures in determining what a function may be modifying without needing to rely on comments.
1) While the majority of users will profit from the convenience of the GC, it's not a big deal to use D without one. (Apart from this kernel, another famous example is the large-scale storage company Weka. They had such hard real-time concerns that they invented their own Ethernet protocol; more details: http://dconf.org/2016/talks/zvibel.html)
Moreover, since last summer D has gained excellent allocation building blocks with its new Allocator API:
http://dlang.org/phobos/std_experimental_allocator.html
There are a couple of advantages of D that are important to me and you didn't mention:
- error-preventing contracts (built-in unittests, assert, in & out blocks, invariants)
- OOP (abstract classes, interfaces, ...)
- mixins & code generation
- compile time function evaluation and introspection (this makes your code even faster)
- clean syntax & style (@property methods, lambdas, ...)
D's GC really could use some love, but it's being worked on in this year's GSoC.
As for using a GC in a systems programming language: just a few days ago we had the links about the Xerox Star. It was originally developed in Mesa, which eventually evolved into Cedar, which made use of a GC.
Also the whole set of ETHZ Oberon workstations.
And many other experiences.
The oldest I can find is the Flex computer system, developed by the UK Royal Navy in Algol-68RS.
I am pretty convinced that, just like with e.g. JavaScript performance, there needs to be someone with enough money to bully the industry into adopting such technologies until they stop being taboo.
I still remember reading from C guys on Usenet that C++ would never be usable for writing OSes.
That's true. You actually can write an RTOS in D, and it would still be more pleasant than in C (thanks to D's excellent metaprogramming). The problem I have with opt-out GC in D is that it leaves you with very little else as far as memory safety goes—you have RAII, but nothing like Rust or Cyclone or other languages that are explicitly safe without a GC. You can leak, you can crash, you can have race conditions, etc, almost as easily as in C. Some sort of D version of unique_ptr would get you part of the way there, but it's still not as complete as Rust.
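As a rough, hypothetical sketch of the unique_ptr comparison: Rust's `Box<T>` behaves like a unique_ptr whose move semantics are enforced at compile time rather than left to convention.

```rust
// Ownership of the heap allocation moves into this function;
// the memory is freed deterministically when `b` goes out of scope.
fn consume(b: Box<i32>) -> i32 {
    *b
}

fn main() {
    let b = Box::new(42);
    let v = consume(b);
    // Using `b` again here (e.g. `*b`) would be a compile error:
    // "value used here after move". With C++'s unique_ptr, a moved-from
    // pointer is still accessible at runtime and dereferencing it is UB.
    assert_eq!(v, 42);
}
```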
D does have @safe, which does a good job of preventing large classes of unsafe pointer issues. It is not perfect yet (we're working on it) but it is far better than "nothing like", and far better than C/C++.
Sure, I was just kind of brainstorming how much better GCed operating systems could be.
As for an RTOS, maybe you eventually could, assuming major future improvements and compilers like the Atego JDK, but there is little to gain versus what Ada, SPARK and eventually Rust already offer today.
For hard real-time systems most kinds of dynamic memory allocation are a no-go. A lot of programs with such requirements have everything statically allocated; others use static memory pools and custom allocators built on top of them. I think that should also work with D, at the cost of losing a lot of the standard library.
I think the most problematic area is somewhere in between: applications that want mostly-real-time performance without having to put too much work into guaranteeing it (e.g. games, low-latency audio processing, ...).
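The static-pool approach mentioned above might look roughly like this (a minimal, illustrative sketch in Rust; `FixedPool` and its sizes are invented for the example):

```rust
// A fixed-capacity pool: all storage is reserved up front, so there is
// no heap allocation and no GC pause at runtime.
struct FixedPool {
    buf: [u8; 64],
    used: usize,
}

impl FixedPool {
    fn new() -> Self {
        FixedPool { buf: [0; 64], used: 0 }
    }

    // Hand out a slice from the pool, or None when exhausted:
    // allocation failure is an explicit, checkable condition
    // instead of an unpredictable runtime stall.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.used + n > self.buf.len() {
            return None;
        }
        let start = self.used;
        self.used += n;
        Some(&mut self.buf[start..start + n])
    }
}

fn main() {
    let mut pool = FixedPool::new();
    assert!(pool.alloc(48).is_some());
    assert!(pool.alloc(32).is_none()); // 48 + 32 exceeds the 64-byte pool
    assert!(pool.alloc(16).is_some());
}
```

A real pool would also support freeing and alignment; the point here is only that all memory is bounded and reserved before the real-time work starts.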
For some, D's garbage collection isn't quite neutral.
Rust, for example, can be used to write a C-like loadable or linkable module for some other system (JNI, Ruby, Python, etc) with a very tiny or nonexistent runtime required.
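For instance, a C-callable entry point in Rust might look like the following (an illustrative sketch; the function name is invented, and shipping it as a shared library requires `crate-type = ["cdylib"]` in Cargo.toml):

```rust
// `#[no_mangle]` keeps the symbol name stable and `extern "C"` gives it
// the C calling convention, so JNI, Python ctypes, Ruby FFI, etc. can
// load and call it like any C function.
#[no_mangle]
pub extern "C" fn add_checked(a: i32, b: i32) -> i32 {
    // Wrapping on overflow instead of panicking keeps the C ABI
    // boundary free of unwinding.
    a.wrapping_add(b)
}

fn main() {
    // The exported symbol is also a plain Rust function, callable directly.
    assert_eq!(add_checked(2, 3), 5);
}
```

No interpreter or GC has to be embedded alongside it, which is the "tiny or nonexistent runtime" property the comment describes.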
I wouldn't be so sure about the claim that "D can never compete with Rust in performance and real-time or soft-real-time systems". If anything, thanks to much better metaprogramming (as you mention), you can have D programs that ARE faster than Rust (e.g. see the JSON parsing benchmark [1]).
You can write programs in D that have ZERO garbage collection.
In fact, the core libraries (phobos) are being actively rewritten to remove GC altogether.
The D "fastjson" version is not a general solution:
> I think it is not necessary to validate data you are not using. So basically
> I only scan them so much as to find where they end. Granted it is a bit of
> optimization for a benchmark, but is actually handy in real-life as well.
> After all you could still raise the validation level to maximum if you really
> cared or call one of the validation functions.
Shameless plug: you can use
https://github.com/tamediadigital/asdf
It should offer comparable performance to fastjson while having all the nice features.
Personally I find D a much more comfortable option. I tried Rust a few times, and it contains a few nice features ('match', expressional if-else, built-in tuples/multiple return) which would be cool to see in something like D, but I found the overt emphasis in Rust on safety to be entirely too aggravating to work with. Perhaps I might try it again in the future, though, who knows.
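The features that comment singles out can be sketched in one small, hypothetical example (the `min_max` function is invented for illustration): `match` for destructuring, `if`/`else` as an expression, and tuples as "multiple return values".

```rust
fn min_max(xs: &[i32]) -> Option<(i32, i32)> {
    // `match` destructures the slice; a tuple gives two return values
    // without out-parameters.
    match xs.split_first() {
        None => None,
        Some((&first, rest)) => {
            let mut lo = first;
            let mut hi = first;
            for &x in rest {
                // `if`/`else` is an expression, so it yields a value.
                lo = if x < lo { x } else { lo };
                hi = if x > hi { x } else { hi };
            }
            Some((lo, hi))
        }
    }
}

fn main() {
    assert_eq!(min_max(&[3, 1, 4, 1, 5]), Some((1, 5)));
    assert_eq!(min_max(&[]), None);
}
```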
To any Rust pros here, I'm curious: how long did it take for you to really get to grips with the language, coming from say, a C/C++ background?
Once you internalise the rules, it clicks and becomes second nature, and you learn to really appreciate the explicitness. Lifetime/ownership errors go way down, and the ones you do get are usually trivial to fix.
That said, as with any powerful type system there are always going to be some confounding cases, but for me the trade-off is worth it. It also makes the complexity of your domain more evident up front and shifts experimentation to compile time rather than run time. I prefer the former, because it is always a joy to have your code compile after a big refactor and have it run pretty much as you expect.
Although the GC is there and is the default, often D projects use various other memory management schemes in critical places. You can use malloc/free, custom allocators, stack allocators, and RAII in the usual ways as required.
> 10x better theorists. Of the three, Rust is the only language with world-class PL theorists on roster. This can be seen in the precise definition of the language and the depth of its technical approach
This argument impressed me. I'm sure Walter Bright does account for a fair share of D's popularity, though, so I take the 10x to mean in numbers :)
Isn't Rust's syntax technically more Algol-like than C/C++/Go/D are? Anyways, I don't think it's worth calling that a disadvantage, at least with the way it's stated. I think "high learning curve" is a lot more accurate in this circumstance. And I'd disagree that the syntax serves no purpose, I think it's specifically designed to reinforce the notions of Rust's programming style (for example, implicit returns because Rust code is designed to always return something instead of using void functions). I'd be surprised if the syntax didn't help shorten code in at least a few places.
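A small, hypothetical illustration of that expression-oriented style (the `sign` function is invented for the example): the last expression of each branch is the function's value, so no `return` keyword is needed.

```rust
fn sign(n: i32) -> &'static str {
    // The whole if/else chain is one expression; each branch's final
    // expression (no trailing semicolon) becomes the return value.
    if n < 0 {
        "negative"
    } else if n == 0 {
        "zero"
    } else {
        "positive"
    }
}

fn main() {
    assert_eq!(sign(-5), "negative");
    assert_eq!(sign(0), "zero");
    assert_eq!(sign(7), "positive");
}
```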
Hehe, I saw a talk with the creators of Rust, D, C++ and Go. None of the "older" fellas even knew the Rust guy, and I don't even remember who he was. Was it Hoare? Maybe.
One of the reasons that "Rust" is the name is that Rust _doesn't_ try things that are "new and shiny." Or rather, it depends on where you come from: coming from the PLT perspective, many of the ideas in Rust are pretty old, but if you come from industry, they seem very new.
(Also, the person who was in the video you're talking about was Niko Matsakis, not Graydon. And it was August 2014, so, so much has changed with Rust since then.)
Well, I don't know about how long any of these "ideas" were around somewhere buried in academia. What counts is that Rust tries things differently than Go, D or C++, so it's "new" in terms of an "implementation in broader use"
Yo I came across a comment you made a while back asking whether there was a scripting language that does rust-like lifetime checking. That thread's archived now so I have to tell you here, it exists https://github.com/pistondevelopers/dyon
I don't really see the point, personally. But you seemed to really want it so
I'm not under the impression that much of what Rust does, besides borrow checking, is all that new. Aren't most of the other things from other languages that have used them to good effect?
But look at the functional stuff. I mean how much of this is really in broad use?
Yes, at the moment there seems to be a functional renaissance, but before that 90% of programmers around the world thought of stuff like Haskell or OCaml as crazy academic experiments.
What "functional stuff" are you referring to? I don't consider Rust a particularly functional language, other than the fact that it has closures and the ability to `map` over iterators. And if closures and `map` are your benchmarks for functional stuff, then Python, Javascript, and Perl have been doing functional stuff for decades now. :)
Iterators are monadic (since they have a flat_map method), Option and Result have the and_then method. Both of these are equivalent to >>=/bind.
The fact that they don't implement pure/return doesn't change the fact that they have monadic interfaces, because such a function would only be useful once generalized into its own trait, and it is trivially implemented for any of these types. For example, Some(x) works as the return/pure function for Option. The fact that the name differs for each type isn't an issue, because there's no way to make code generic over all monads anyway; it would be like having a separate trait for each type, with `bind_option`, `bind_result`, etc.
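A minimal sketch of the correspondence (the `parse_even` helper is invented for illustration): `and_then` plays bind's role for `Option`, and `flat_map` does the same for iterators.

```rust
// and_then chains Option-producing steps, short-circuiting on None:
// the same shape as Haskell's >>= for Maybe.
fn parse_even(s: &str) -> Option<i32> {
    s.parse::<i32>()
        .ok()
        .and_then(|n| if n % 2 == 0 { Some(n) } else { None })
}

fn main() {
    assert_eq!(parse_even("4"), Some(4));
    assert_eq!(parse_even("3"), None);
    assert_eq!(parse_even("x"), None);

    // flat_map is bind for the iterator "monad": each element maps to a
    // sequence, and the sequences are flattened into one stream.
    let v: Vec<i32> = [1, 2].iter().flat_map(|&x| vec![x, x * 10]).collect();
    assert_eq!(v, vec![1, 10, 2, 20]);
}
```

And `Some(x)` / `std::iter::once(x)` serve as the respective return/pure functions, which is the point made above.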
`Option` et al may have vaguely monadic interfaces, but don't you need `return` in order to satisfy the monad laws?
Regardless of what we determine there, I don't think that matters in the broader context of this thread, which concerns the notions that Rust has the potential to scare people away by including unfamiliar "functional stuff". But seeing an `.and_then` method doesn't scream "functional programming"; method chaining has been long popularized in Javascript, Ruby, etc. Furthermore, AFAIK there's nothing about Rust's pseudo-monadic interfaces on iterators that hasn't already been popularized by LINQ in C#.
Because it's a modern, safer systems language. It fills more or less the same niche as Rust except it's more familiar to C++ programmers, or at least that's been my experience. I know C++ fairly well, and I feel more at home with D than with Rust.
Why D?
http://www.infognition.com/blog/2014/why_d.html