
D as a Better C - ingve
http://dlang.org/blog/2017/08/23/d-as-a-better-c/
======
christophilus
I tinkered with D a bit (along with Nim, Dart, and C as a comparison) while
writing a Clojure interpreter.

D seems to be a nice language, but I found the editor integration wasn't the
best. I also found it annoying that the docs used `auto` all the time, so you
could never figure out the right type annotations for their APIs (e.g. I want
to call the foo API and return its value from my method, but the docs all use
`auto` to refer to its return value, so I don't know how to annotate my
method.)

I liked Nim better in almost every way but one: the compiler was fickle and
would just fail silently sometimes.

~~~
dom96
(Disclaimer: one of the Nim core devs here)

Happy to see you paint Nim in a (sort of) positive light, I hope I can help
with this one negative.

> I liked Nim better in almost every way but one: the compiler was fickle and
> would just fail silently sometimes.

Can you give some examples and elaborate on what you mean by "fail silently"?
Did you at least get a segfault?

~~~
e12e
Since it's already brought up in this thread, I recently had (another) look at
nim and was a little disappointed too.

It's been a while, but as I recall, on Windows - it was difficult to find a
supported way to output Unicode on the console in a sane, portable way (write
utf8 nim source code, get wide strings in Windows console and utf8 under eg
Linux) - and I think I also had some problems getting off the ground with
statically linking sdl2 (again on Windows).

The build/package/config system appears to be in a bit of flux?

[ED: and I think there were old open issues and forum posts for both problems,
hence no new bug reports...]

~~~
dom96
Indeed, there are some bugs with Unicode output[1]. Just to be clear, the
standard way to output anything to the console is `echo` (similar to Python's
`print`).

It seems that progress on that issue has stalled. It's unfortunate, but we
have limited man-power and rely on the community to help us out, so at this
stage it helps if you don't mind getting your hands dirty with potential bugs.

Unless one of us is directly affected by a bug, or we see a large number of
users affected by it, it's unlikely that we will focus on it. Not much we can
do there, sadly. I hope you can understand.

Regarding static linking I would say that it's challenging for most languages.
Especially on Windows. If you pop into our IRC/Gitter channel[2] though I'm
sure somebody would help you out. You can also probably get pretty far by
asking C users for help.

Regarding the build system, perhaps you are referring to the fairly recent
addition of NimScript? I think that has stabilised by now. But if you've got
any specific questions about it then feel free to ask them.

1 - [https://github.com/nim-lang/Nim/issues/2348](https://github.com/nim-lang/Nim/issues/2348)

2 - [https://nim-lang.org/community.html](https://nim-lang.org/community.html)

~~~
e12e
Now that I'm on my laptop with access to the two simple programs, here is the
one for text output:

    
    
      import encodings
      var hello1 = convert("Hellø, wørld!", "850", "UTF-8")
    
      # Doesn't work - seems to think current codepage is utf8.
      var hello2 = convert("Hellø, wørld!", getCurrentEncoding(), "UTF-8")
    
      # Outputs correct text:
      echo hello1
      # Outputs corrupted text:
      echo hello2
    

(And simply outputting an unconverted string fails, like hello2 does).

As for the linking problem I had, the main issue is that it seems harder than
it probably should be to get started with GUI programming with Nim on
Windows. The following code works, _provided_ one manually obtains sdl2.dll
(and sdl2.lib for static linking) - however, even with static linking flags
(no errors), the resulting exe still depends on the presence of sdl2.dll:

    
    
      import nimx.window
      import nimx.text_field
      import nimx.system_logger
    
      proc startApp() =
        var window = newWindow(newRect(40, 40, 800, 600))
        let label = newLabel(newRect(20, 20, 150, 20))
        label.text = "Hellø, wørld!"
        window.addSubview(label)
    
      runApplication:
        startApp()
    

With sdl2.dll (and .lib) in the same folder both of these work:

      nim --threads:on --dynlibOverride:SDL2 --passL:SDL2.lib c -r win.nim
    
      nim --threads:on c -r win.nim

but both fail without sdl2.dll present (i.e. the "statically" linked exe still
depends on dynamically loading sdl2.dll).

And there's so far no easy way of getting a "supported" sdl2.dll to go with
the Nim compiler - as far as I can tell, neither "nimble install sdl2" nor
"nimble install nimx" provides a way to get the sdl2.dll and/or the C source
code to compile it.

But perhaps managing DLLs and such is considered out of scope for Nimble/the
Nim package manager for now.

~~~
FullyFunctional
Ha, your comment was my first introduction to Nim, but it looks interesting, so
I ran "brew install nim" and tried your test (thanks!) with "nim c test.nim &&
./test" on macOS, and my results are the opposite of yours: hello1 is garbled
while hello2 works.

A bit of googling found the next step, "nimble install nimx", but to compile
your win.nim example I had to use "nim --noMain --threads:on c -r win.nim";
the result worked. However, AFAIK there's no official way to make statically
linked binaries on the latest macOS.

------
systems
D is betting hard on memory safety and, apparently, systems programming.

Honestly, I think that had they gone in the direction of D as a better Python,
and application development, they would have made bigger wins (in terms of
popularity) ... better tooling, better IDE, refactoring, better GC, better
libraries ... better, faster programs

~~~
nindalf
The better Python market isn't an easy one to crack because it's a bit crowded.
Go (despite its perceived and real faults) has succeeded in this space by
delivering better GC, good libraries, static typing and faster programs.
Python itself is improving rapidly, for example with the addition of Type
Hints. It's pretty difficult to be a better Python in 2017.

The better C market, on the other hand, hasn't seen any real contender other
than C++ gain traction for decades. Rust is trying now and it could get there,
but it's still a massive opportunity.

~~~
new299
I really, really don't mean to pile on Python... but every time I've had to
interact with it, I've been shocked at how slow it is compared with C or C++. I
tend to write scientific code to process datasets in the range of 10 GB; for
simple operations, Python code can take hours as opposed to minutes or seconds
in C.

I'm sure it's possible to write more highly optimized code in Python, but it
never seems to be the case with the code bases I've worked with. With my own
code (a few years ago now), I spent significant time optimizing and the final
result was something that was still significantly slower than C/C++. Overall,
for my work, I couldn't see an advantage.

Am I just doing it wrong? Currently Go seems far more appealing to me and I've
enjoyed using it (maybe I should also try Rust).

~~~
saghm
Python is a high-level interpreted language, and C/C++ are low-level compiled
languages (even compared to other compiled languages). While you might be able
to optimize your Python code to run faster than it does now, it's never going
to match the performance of C/C++, nor is it intended to.

Go will be a significant speedup over Python, but likely won't quite match the
speed of C/C++ for most tasks. Then again, the ease of development in Go will
likely be noticeably better than in C or C++. Rust could potentially match the
speed of C++, but it's a much more complex language than Go, so it will take
some time to master. (Personally, having spent a large amount of time
programming in Rust, I find myself more productive than in Go due to the
powerful abstractions present in the former and lacking in the latter, but
conventional wisdom is that this won't be the case for most people, and it
certainly won't be when you're first learning Rust).

EDIT: Anecdotally I've heard that D is a nice language, but I have no
experience with it, so I can't comment on how productive it is or how fast D
code will run.

~~~
pjmlp
You can easily get best of both worlds with Lisp derived languages.

~~~
sythe2o0
It's been my understanding that Lisp and its relatives aren't designed to
compete against C for speed either. Is this wrong?

~~~
flavio81
"How to make Lisp go faster than C" (2006)
[http://www.iaeng.org/IJCS/issues_v32/issue_4/IJCS_32_4_19.pdf](http://www.iaeng.org/IJCS/issues_v32/issue_4/IJCS_32_4_19.pdf)

And these figures weren't obtained with the SBCL Lisp compiler, which should
currently be faster than the one cited.

You are correct, it wasn't designed to compete against C. And it has a garbage
collector. But make the right choices and a compiler like SBCL can produce
surprisingly "clean" (optimized) machine language code.

~~~
saghm
C compilers have presumably gotten faster since then as well, of course

------
benlorenzetti
I like Walter's writing. He is very articulate; terse but in a polished way.
Relies on you knowing a thing or two first though.

------
altotrees
I have really been enjoying working with Rust lately. I also enjoyed working
with Go prior to that. Maybe I will give D a try too, see if I have similar
luck.

I can find things I like and dislike about each of the new languages I try,
but it kind of feels like there's a glut of options for me right now, and each
one has held mostly positive surprises. A good problem to have: too many new
languages that don't give me a headache, making it hard to choose.

------
Ace17
Why remove RAII?

It's not fundamentally incompatible with "Better C" semantics, especially if
there are no exceptions.

~~~
WalterBright
RAII requires exceptions to be correct. I hesitate to say it has RAII
otherwise.

Exceptions have two issues:

1\. D exceptions currently require the GC. There is a Pull Request to fix
that, but it isn't incorporated yet.

2\. More problematic is that the DWARF exception handling mechanism requires a
language-specific "personality" handler. This is supplied by the D runtime
library. Tricking the C runtime library's handler into working with D isn't a
solved problem at the moment.

~~~
millstone
Why does RAII require exceptions? RAII is routinely used in C++ with -fno-
exceptions.

~~~
WalterBright
Because D code may sit in between code that throws an exception and code that
handles it. Without EH support, the RAII destructor won't get called when the
stack is unwound.

~~~
dbaupp
Isn't it generally undefined behaviour to have exceptions pass through code
that isn't compiled with support for them? I.e. not running some destructors
is the least of your worries if an exception hits C/-fno-exceptions/-betterC
code.

~~~
WalterBright
Undefined behavior or not, implementations of languages in gcc and Windows
tend to support it, even if the language itself does not. I consider it my job
to make this work if at all practical, because users do not like ugly
surprises.

I don't know how Rust's RAII deals with exceptions thrown by lower level code.
If someone better acquainted with Rust could chime in here, it would be
interesting.

~~~
steveklabnik
> I don't know how Rust's RAII deals with exceptions thrown by lower level
> code.

Unwinding across an FFI boundary is considered undefined behavior.

[https://doc.rust-lang.org/reference/behavior-considered-undefined.html#behavior-considered-undefined](https://doc.rust-lang.org/reference/behavior-considered-undefined.html#behavior-considered-undefined), last bullet point.

~~~
WalterBright
That's exactly what I wanted to know. Thanks!

Full D is compatible with foreign exceptions and unwinding. I'll see about
making D as Better C also compatible, but I suppose Rust's precedent makes it
acceptable to not handle it.

~~~
steveklabnik
No problem!

This isn't my super strong area of expertise, but don't you have to specify
_how_ the unwinding works? I believe this is part of the reason why Rust
punted; you'd have to specify exactly what kind is supported. (Rust uses
libunwind for panics, incidentally.)

~~~
WalterBright
The _how_ is handled by providing a language-specific "personality" function
in the generated exception tables. The D runtime library has one for D. But
with D as Better C, I'd have to find a way to make the C personality function
work for D.

~~~
steveklabnik
Ah right, makes sense. Thanks :)

------
fithisux
"Exceptions, typeid, static construction/destruction, RAII, and unittests are
removed"

Thank you guys

"But it is possible we can find ways to add them back in."

Over my dead body

------
jfaucett
Does anyone know what the package management story is like for D? And also
what is the D ecosystem like for embedded systems?

~~~
pjmlp
It is called dub.

[https://code.dlang.org](https://code.dlang.org)

Regarding embedded, there has been some progress lately, but it still needs
some improvement.

------
codyguy
I have tried D and loved the language, but I feel the ecosystem and environment
issues need to be addressed ASAP. Here are a few inputs: (1) setting up the
environment on different operating systems, e.g. GNU/Linux variants. Compilers
(LDC, etc.) are not really easy to set up on all systems.

(2) Memory usage during compile can get out of control resulting in mid-build
crashes if sufficient memory is not available.

(3) Setting up a simple HTTP server should be made simple (ideally the way Go
allows any regular Go program to be turned into an HTTP server).

~~~
pirocks
High memory usage could be caused by CTFE (compile-time function execution)
code. There is an existing project to improve the CTFE interpreter, called
newCTFE.

------
steveklabnik
> What may be initially most important to C programmers is memory safety in
> the form of array overflow checking, no more stray pointers into expired
> stack frames, and guaranteed initialization of locals.

Are there any docs about this anywhere?
[http://dlang.org/spec/betterc.html](http://dlang.org/spec/betterc.html)
doesn't really describe this stuff.

~~~
zombinedev
It is not described in the betterC page as these features are not new for D.
The betterC page only describes the differences versus the full D feature set.

Documentation for the individual features mentioned:

[http://dlang.org/spec/function.html#safe-functions](http://dlang.org/spec/function.html#safe-functions)

> array overflow checking

[http://dlang.org/spec/arrays.html#bounds](http://dlang.org/spec/arrays.html#bounds)

> guaranteed initialization of locals

[http://dlang.org/spec/type.html](http://dlang.org/spec/type.html)
[http://dlang.org/spec/declaration.html#void_init](http://dlang.org/spec/declaration.html#void_init)

> no more stray pointers into expired stack frames

DIP1000 describes D's scoped pointers approach to memory safety -
[https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md](https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md).
This is a work in progress and the document doesn't reflect the latest state of
things. Conceptually, scoped slices/pointers/references are similar to the
concept of borrowing that you guys have in Rust.

~~~
steveklabnik
Great, thanks! That makes sense, but it wasn't clear to me in the post.

------
lobster_johnson
Is "BetterC" only for DMD? I see that LDC (the LLVM-based compiler) also
appears to have a "--betterC" flag.

------
srcmap
I'd love to see someone prove D is a better C by porting the "small" SQLite
(124K lines of C code) to D and running some benchmark/test code against it.

I used to work on an OO database engine that handled billions of records.

I ended up rewriting/overloading my own new/delete and redesigning how data is
loaded/stored around it, shortening the open/close document time from hours to
seconds for large documents.

Basically, one can mmap billions of records directly into data structures
accessible by the C API in a few ms. One can also dispose of billions of
records in a few ms. I saw similar design patterns in the SQLite code base.

A language designed with memory safety (GC) in mind won't allow anything like
that, as far as I know.

I'd love to see some D experts prove me wrong.

~~~
jmh530
Could you recommend some resources on these techniques?

~~~
anonfunction
[https://stackoverflow.com/questions/45972/mmap-vs-reading-blocks](https://stackoverflow.com/questions/45972/mmap-vs-reading-blocks)

------
zxy_xyz
Funnily, the D code looks more verbose than the C code.

~~~
zombinedev
Yes, in this example, but once you start looking at more interesting pieces of
code it becomes close to impossible for C to beat D on the code-readability
front without heavy use of macros. Check the runnable examples on the front
page - [http://dlang.org/](http://dlang.org/).

------
Rhinobird
D is one better than C, but K goes all the way to 11.

~~~
gjm11
(This is a slightly funnier joke than it may appear at first glance, because
as well as being the name of another programming language K is in fact the
11th letter of the alphabet.)

~~~
wentoodeep
M is 2 steps ahead of K

------
MichaelBurge
If I'm writing C, it's usually a small module or algorithm that I link into
another language. Since D is supposed to be in gcc now, I wonder if it's
reasonable to start including D code in e.g. Perl or Ruby or Haskell
libraries.

The great thing about C is once you write it, you never really have to worry
about the language changing 15 years down-the-line and breaking your code. The
GNU version might have a similar property, since GNU tools stay around for a
long time.

------
fithisux
Now we need another flag to enable a better Java mode. By default D is in
better C++ mode.

~~~
jmh530
How about betterC# instead? Given that C++/CLI is a thing, I imagine that it
would be possible to get D working on it. The JVM is a bit more limited in what
it allows vs. the CLR. For instance, C# has the `unsafe` keyword that allows
pointer arithmetic, which would correspond to @system in D; @safe D would
correspond to the normal C# behavior on that front.

Probably a lot of work though and I'm not sure why anybody would bother.

~~~
srean
D already is and has been (better C#) when taken as a language. No need for a
command line flag.

------
YSFEJ4SWJUVU6

      int main(char** argv, int argc) {
    

Really? :-)

~~~
WalterBright
How embarrassing :-)

But it doesn't affect the operation of the program (I did test it).

Edit: fixed it

