
Hello World - ddevault
https://drewdevault.com/2020/01/04/Slow.html
======
phoe-krk
What is this post supposed to prove? It certainly is not supposed to prove
that a hello world is a representative real-world program, from which one
could infer that writing and debugging a real-world program in Julia is 835
times as complex as writing one in assembly because the former makes 835
times as many syscalls as an assembly program. (You know, that number seems
about right to me, except it needs to be applied to these languages in
reverse.)

I agree that software bloat is a big problem, but this post trivializes it.
It reduces the problem to printing a "hello world" to the screen; it punishes
every language with a runtime by measuring the syscalls involved in its
startup routine; it disregards the fact that many users will have a single
system-wide runtime for e.g. C or Python or Julia, so the total-kB number
does not scale linearly with the number of programs written in those
languages; it ignores the massively increased development and debugging time
of writing in low-level, memory-unsafe languages like assembly, static Zig,
or C; and it directly implies[0] that most problems with software complexity
can be solved by writing in assembly, static Zig, or C rather than in
Julia/Ruby/Java/all the other languages in the bottom 90% of the list (that's
the vibe this post gives me, at least). To me, that makes it more of a
venting shitpost than something that provides even part of an actual solution
to software bloat in general.

The "more time your users are sitting there waiting for your program"
statement is especially amusing to me. Your users are not going to wait
shorter for your program because you and your team are taking another year to
write, test, and debug it in assembly.

[0] "These numbers are real. This is more complexity that someone has to
debug, more time your users are sitting there waiting for your program, less
disk space available for files which actually matter to the user."

~~~
einpoklum
> What is this post supposed to prove?

It proves that the compilers for many programming languages emit a lot of
instructions, and a lot of system calls, that they could in theory have
optimized away.

This is particularly true of "startup routines": when the program to be run
is known, almost all of that startup can be shown to be unnecessary and
stripped from the final executable.

> Your users are not going to wait shorter for your program because you and
> your team are taking another year to write, test, and debug it in assembly.

They might, if your program is interactive and used often enough. Now,
writing assembly is not very realistic, but considering a different language
with less overhead is.

~~~
hokumguru
For all the daily articles that pop up here on HN concerning premature
optimization I find it surprising to see this thread and all the sentiment of
language choice mattering.

It’s far easier to work in, and translate thoughts to, more feature-rich
languages. The vast majority of the time it’s not the language itself that
becomes an optimization issue down the line, but rather the code written in it.

~~~
overgard
Here's the problem: your language isn't the optimization issue... until it
is. You optimize all the hot spots in your profile, and then all of a sudden
you're looking at a very big, samey chart of everything taking slightly too
long. You've optimized every possible thing, nothing in particular sticks
out, and there's nothing more you can do about it. At that point you're
totally fucked, because you're looking at a rewrite; probably in a language
your team doesn't know and doesn't like.

Sometimes language matters! Not always, but frequently!

~~~
saagarjha
> you're looking at a rewrite at that point; probably in a language your team
> doesn't know and doesn't like

Or you commit atrocities like rip the garbage collector out of Python. I’m not
joking; Instagram did this: [https://instagram-engineering.com/dismissing-
python-garbage-...](https://instagram-engineering.com/dismissing-python-
garbage-collection-at-instagram-4dca40b29172)

~~~
lasagnaphil
That’s actually a pretty clever (and crazy) idea, because when you disable
the gc in Python the reference counting still works and only the
cycle-detection part goes away. So you are just dealing with reference
semantics, the same as C++’s shared_ptr or Swift’s references, which isn’t
that hard to use if you are careful about cyclic refs. And if the memory leak
exceeds a certain amount, they can just restart the process.
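
A minimal sketch of that trade-off using CPython's standard gc module (not
Instagram's actual patch):

    
    
      import gc
      gc.disable()          # turns off only the cycle detector
    
      x = [1, 2, 3]
      del x                 # acyclic: refcount hits zero, freed immediately
    
      a = []; b = [a]; a.append(b)
      del a, b              # a cycle: refcounts never reach zero, so it leaks
      gc.collect()          # ...until collected manually (or the process restarts)
    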

~~~
Doxin
Disabling the Python GC like that isn't _that_ crazy. gc.disable() wouldn't
exist if you were never supposed to use it. The crazy part is a library
calling the gc directly with no way to turn that off. That's just bad form.

A lot of games in GC'd languages turn the GC off and then manually run it
whenever there's some spare time in a frame. The same thing should more or
less work for a server handling web requests too: run it after every request,
or perhaps every couple of requests, depending on memory usage.
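
Roughly that pattern, sketched in Python (handle_request and the collection
interval are hypothetical):

    
    
      import gc
    
      gc.disable()   # no surprise GC pauses mid-request
    
      def serve(requests):
          for i, req in enumerate(requests, 1):
              handle_request(req)    # hypothetical application handler
              if i % 100 == 0:
                  gc.collect()       # reclaim cycles at a moment we choose
    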

------
majewsky
Two notes on Go. Since Drew is stripping the binaries for many other
languages, he should also do it for Go:

    
    
      go build -ldflags '-s -w' -o test test.go
    

Also, since he's complaining about the difficulty of obtaining a static build
in Go: There's an issue for that.
([https://github.com/golang/go/issues/26492](https://github.com/golang/go/issues/26492))
Drew definitely knows, he left an incredibly unhelpful comment there
yesterday.

~~~
Aeolun
You do have to wonder why nobody thought ‘this is getting a bit crazy’ before
though.

~~~
tptacek
You do?

~~~
Aeolun
It is known.

I guess I could have written: I think someone should have wondered whether
this was actually a reasonable way to make a statically linked binary before.

Maybe if you’ve gotten used to invoking the magic it doesn’t seem quite so
arcane anymore.

~~~
hundt
FWIW if you don't need cgo then CGO_ENABLED=0 is sufficient to get a static
binary. The tricky part is getting a static binary while also embedding C
libraries. That's what the github issue is about.
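
For example (assuming test.go uses no cgo):

    
    
      CGO_ENABLED=0 go build -o test test.go
    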

------
_paulc
As it's not on Drew's list:

Nim:

    
    
      $ cat hello.nim
      stdout.write("hello, world!\n")
    

Static (musl):

    
    
      $ nim --gcc.exe:musl-gcc --gcc.linkerexe:musl-gcc --passL:-static c -d:release hello.nim
      $ ldd ./hello
      not a dynamic executable
    
      Execution Time: 0m0.002s (real)
      Total Syscalls: 16
      Unique Syscalls: 8
      Size (KiB): 95K (78K stripped)
    

Dynamic (glibc):

    
    
      $ nim c -d:release hello.nim
      $ ldd ./hello
      linux-vdso.so.1 =>  (0x00007ffc994b6000)
      libdl.so.2 => /lib64/libdl.so.2 (0x00007f7c88785000)
      libc.so.6 => /lib64/libc.so.6 (0x00007f7c883b8000)
      /lib64/ld-linux-x86-64.so.2 (0x00007f7c88989000)
    
      Execution Time: 0m0.002s (real)
      Total Syscalls: 42
      Unique Syscalls: 13
      Size (KiB): 91K (79K stripped)
    

Which I think is actually pretty reasonable for a high-level GC'd language.

~~~
zamadatix
"Size (KiB): 95K (78K stripped)"

Seems suspicious that it lines up with the 95.9 KiB the author listed for the
C + GCC + musl static build, even though the author says they stripped the
binary afterwards. I think they might have copied the wrong number into the
table :).

The author was counting the size of a dynamic build as the binary plus the
dynamically linked files. That should be about the same as the C dynamic
entries in the table in this case anyway, but it's worth noting for anyone
else running their own tests.
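
Something like this reproduces that accounting (library paths come from the
ldd output above and vary by system):

    
    
      $ ldd ./hello                                       # list the linked libraries
      $ du -k ./hello /lib64/libc.so.6 /lib64/libdl.so.2  # binary plus each library
    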

~~~
NilsIRL
Nim compiles to C

------
cycloptic
Sorry but I have to give this a thumbs down for not being a very convincing or
well-written blog post. It dumps some data and then immediately jumps to a
statement about how "lots of syscalls = bad" without actually detailing what
those syscalls are doing in the context of the runtime. And I'm saying this as
someone who already runs Alpine on my servers and doesn't need to be
convinced. Drew, I think you can write much better posts than this.

~~~
nielsole
It's still an interesting table. I was surprised that Java is 10x faster than
Python; I would have expected initializing the JVM to be about as complex as
initializing the Python interpreter.

~~~
giantrobot
The author doesn't state the Java version, but more recent JREs (9+, IIRC)
start up way faster than older versions. I'd imagine a JRE's launch time is
heavily influenced by the disk cache: a warm launch, with the tens of
megabytes of classes already cached in RAM, ends up way faster than a cold
one, since it basically only has to load the program's class file from disk.

~~~
kick
He literally does list the Java version, though...?

 _OpenJDK 11.0.5 JRE_

~~~
giantrobot
Well I guess I missed it.

------
thamer
Haskell:
[https://gist.github.com/nicolasff/fe5668194769c175d9d4dfc78f...](https://gist.github.com/nicolasff/fe5668194769c175d9d4dfc78f694724)

38 bytes of source code; 970 KB binary (782 KB stripped); 0m0.003s execution
time; 139 syscalls (26 unique).

Seeing the list of syscalls is the most interesting part of this whole
exercise; the number of milliseconds it takes to print "hello world" is not
super relevant (except in the few cases where start-up time is painfully
long).

------
zamadatix
"Passing /dev/urandom into perl is equally likely to print “hello world”"

That gave me a good chuckle towards the end.

It'd be useful to break this out a little further: it would have been
interesting to see how small just the binary itself is for the dynamically
linked versions, instead of only comparing static builds to the whole dynamic
bundle.

It's also a bit odd that e.g. Zig gets optimized for size and stripped via
the compiler, C gets optimized for speed and stripped via strip, and
Go/Crystal just get built standard with no stripping at all. I don't think
it'd change the big picture; it's just a bit odd.

.

Unrelated tangent/ramble: I played with Zig and Go as part of my yearly "take
December off and tinker" break. Zig was really fun to work with, but
unfortunately still in a lot of churn and development. Go was a lot better
than I expected (I had put off messing with it for a few years), and the size
of the stdlib is just astounding. In the end it wasn't as "fun" as Zig, but
it had very low friction, and I definitely see myself using it for a few
personal projects over the next year... and then seeing if Zig has less churn
in December ;).

------
hn_throwaway_99
What is it with all the recent posts fetishising absolute performance over
lots of other attributes, or, even worse, pretending those other attributes
don't matter at all?

What is the point of this post? Yes, I fully expect a simple Hello World in
assembly to be straightforward and fast. I still want the advantages of
things like automated memory management, an interpreter or JIT compiler where
warranted, a standard runtime environment, etc., for anything even remotely
complicated.

I get it, over the past 30-40 years we've built layers upon layers of
abstraction, so it's worth it to take a look back and ask "Are there some
cases where we overdid it?" Still, let's not throw the baby out with the
bathwater, or forget why we added those layers in the first place.

~~~
tyingq
I still find the table worthwhile, and maybe even worth looking into for the
worst outliers. It's not definitive, but the spread is interesting.

------
jagrsw
This would be portable C (i.e., not x86-64-specific, but still Linux-only)
with just 2 syscalls:

    
    
      $ cat a.c
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
    
      void _start(void) {
        syscall(__NR_write, 1, "Hello world!\n", 13);
        syscall(__NR_exit_group, 1);
      }
    
      $ gcc -nostartfiles a.c -o a -static
      $ ls -l a
      -rwxr-xr-x 1 user user 9584 jan  5 00:07 a
      $ strip -s a; ls -l a
      -rwxr-xr-x 1 user user 9000 jan  5 00:10 a
      $ strace ./a
      execve("./a", ["./a"], 0x7ffd4d4f8250 /* 67 vars */) = 0
      write(1, "Hello world!\n", 13)          = 13
      exit_group(1)                           = ?
      +++ exited with 1 +++
    

I'm not actually sure why syscall() is inlined and the rest of the glibc code
is not included here when compiling with -static, but, well, it works. Maybe
it's because syscall() is a macro, or maybe it's some kind of linker code
minimization (removing unused/unreachable code), which seems to be common in
modern static linkers (ld, gold, lld).

~~~
sandov
> portable

> #define _GNU_SOURCE

If it's portable, what's the point of defining _GNU_SOURCE? Honest question.

~~~
saagarjha
I’m not sure why they aren’t using the SYS_* constants either.

~~~
jagrsw
Yup, it'd make it (also using SYS_exit instead of SYS_exit_group) compilable
under various *BSD OSes. Doesn't work for some reason under NetBSD due to some
linking problem producing invalid binary for the platform.

    
    
      $ gcc -nostartfiles a.c -o a -Wl,--entry=_start -static
      $ ./a
      -bash: ./a: cannot execute binary file: Exec format error
      $ file a
      a: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
    

But the final binary is impressively small, and it seems to contain all the
relevant code (the bodies of _start and syscall):

    
    
      $ strip -s a; ls -l a
      -rwxr-xr-x  1 user  users  2688 Jan  5 07:53 a
    
    

Works under FreeBSD though:

    
    
      $ clang -nostartfiles a.c -o a -static
      $ ./a
      Hello world!
      $ strip -s a; ls -l a
      -rwxr-xr-x  1 user  user  9216 Jan  5 07:59 a

------
franciscop
Random thought/question: does `process.stdout.write("Hello World");` in
Node.js make any difference? While `console.log()` is correct for this
analysis since it's the more common choice, it does a lot of extra internal
logic:
[https://github.com/nodejs/node/blob/v13.x/lib/internal/conso...](https://github.com/nodejs/node/blob/v13.x/lib/internal/console/constructor.js)

Edit: I'm just curious and don't know how to even start testing this; I'm not
trying to promote/demote Node.js in any way.
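
One way to start, sketched with strace's count mode (-c summarizes syscall
counts, -f follows child threads, and node -e evaluates an inline script):

    
    
      $ strace -cf node -e 'console.log("Hello World")'
      $ strace -cf node -e 'process.stdout.write("Hello World\n")'
    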

~~~
z3t4
As a performance optimization I use stderr instead of console.log, for some
reason stderr is much faster then stdout :P And console.log is slow as hell,
mostly because it's sync, but also because it does a ton of stuff; parsing the
parameters, etc.

~~~
m45t3r
It's not that stderr is faster; it's that it's unbuffered by default (while
stdout is buffered by default). If you flush your output after printing to
stdout, you should get a similar result.

And I suspect you're seeing "much faster" only for small amounts of text
(small enough not to fill the stdout buffer). Printing a huge amount of text
to stdout should actually be much faster than printing it to stderr,
precisely because of that buffer.
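
A quick Python demonstration of the buffering difference (run it as
python3 demo.py | cat so stdout goes to a pipe):

    
    
      import sys, time
    
      sys.stdout.write("via stdout\n")   # block-buffered when piped: sits in the buffer
      sys.stderr.write("via stderr\n")   # effectively unbuffered: appears immediately
      time.sleep(2)
      sys.stdout.flush()                 # only now does the stdout line show up
    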

------
tptacek
This appears to have been written by someone who thinks the point of "Hello
World" is to print the string "Hello World" as efficiently as possible. If
that were the case, Kernighan wouldn't have written the first one with
"printf()".

~~~
overgard
Why shouldn't we expect a compiler to emit the most efficient code? Isn't that
the point?

If they're not emitting the most efficient code on "hello world", that's
notable. It's not groundbreaking, but I'd like whatever language I choose to
be efficient with the small things as well as the large things.

I know what you're going to say: "printf does a lot more". Ok. But can I
statically analyze printf("hello world") and notice, with zero ambiguity,
that it's not doing anything interesting?

~~~
tptacek
Kernighan's compiler didn't do that, and he had puts(3) available, because he,
you know, invented it at the same time. Was he just a suboptimal C programmer?

~~~
overgard
Nice dodge, but the question is: should the compiler emit the most efficient
code possible, or not? I don't care what Kernighan's compiler did, I care
what's possible. It's been 40 years. What do we got?

~~~
tptacek
I'm sorry, I don't understand what this has to do with the point I made. I was
talking about Kernighan, not about the most efficient possible modern
implementation.

~~~
overgard
Well, maybe I missed your point then. Is your critique that the author used
"printf"?

I was responding to this statement:

> This appears to have been written by someone who thinks the point of "Hello
> World" is to print the string "Hello World" as efficiently as possible.

To me... that would straightforwardly seem to be the point of writing a hello
world program.

~~~
tptacek
My critique is that the author missed the point of "Hello World".

~~~
overgard
Ok, I'll bite: In your mind, what is the _point_ of "hello world"? Because I
think the author's perspective is that it's a "small program that does
something interesting". That seems pretty reasonable. I'm completely lost on
what you're trying to say. I don't mean that in a snarky sense. Obviously
you're a well regarded poster here and I'm interested in what you have to say.
I just have no idea what you're communicating.

~~~
tptacek
The point of "Hello world" is to be the minimum program to demonstrate a
programming environment (and verify that you have it working). Optimizing it
is a very weird idea.

~~~
kungtotte
Should the compiler get to decide arbitrarily at which point a program
becomes important enough to optimize, or should it apply the same level of
optimization (based on compiler flags etc.) consistently, so that you can
reason about what will happen when you compile the code?

The blog post isn't even about how optimized the end result is. It's to get
_you_, the programmer, to think about the cost you incur by going further
down this table when choosing your language/environment. Sometimes that's
fine; Drew himself writes a lot of Python, for example for his web stuff,
because it's the best choice there and writing it in C or Zig would be a
pointless effort. The key thing to note, though, is that he picked Python
while being well aware of this table.

Lots of programmers today aren't aware of this table. That's the point of the
post.

~~~
cycloptic
>Should the compiler get to decide arbitrarily at which point a program
becomes important enough to optimize

It probably already does. In a real program run with a real compiler you
cannot reasonably know what the optimization process is going to do. You have
to look at the output.

>It's to get you, the programmer, to think about the cost you incur by going
further down in this table when choosing your language/environment.

>Lots of programmers today aren't aware of this table.

Please don't. You don't have to be a CS prodigy to notice the size of an
executable or the fact that a program is starting slow.

------
alaroldai
An alternate interpretation of the same data - if you're committed to writing
a program in a language that requires a runtime / does a lot of work on
startup, writing a program that prints "Hello, World!" does not make the most
effective use of that language and runtime.

Put that way, most of this post seems like a tautology: if you misuse the
tools you are given, of course you're going to get bad results!

It seems reasonable to me that a language should assume that the programmer's
use case matches the language's strengths, so by default any runtime setup /
bookkeeping / teardown code should run. Failure to remove that extra
"complexity" isn't a failure of the toolchain; it's a failure of the
programmer to select the right tool for the job.

If the point the author was trying to make was that the complexity being added
is _never_ useful, this is not a post that argues that position. A cost /
benefit discussion of the specific behaviours being supported by that
complexity would be very interesting!

------
ChrisMarshallNY
That's pretty cool. It reminds me of Godbolt
([https://godbolt.org](https://godbolt.org)).

I'm told the story behind it is that its author was arguing with someone
about the efficiency of an operation, and actually wrote the site to prove
his point.

------
lttlrck
It’s comparing assembly whose assembly step isn't counted against JITed code
and compilers that are pulling in precompiled libraries.

I feel it needs some kind of normalizing. I get that it is illustrating
bloat, but it doesn’t really illustrate where that bloat is coming from.

Maybe only the output of the JITs should count, or the syscalls required to
assemble the example should be included. Are musl and glibc really wasting
cycles, or are they doing something that the example is missing?

Fun to think about.

~~~
ddevault
It's deliberate that JITs, interpreters, and compiled languages are compared
on the same terms here. JITs and interpreters are fundamentally less
performant than compiled languages, they don't get a pass on performance tests
just because it's by design.

~~~
joshuamorton
> JITs

This is... highly context-dependent. For highly polymorphic code, my
understanding is that JITs can outperform precompiled binaries, since they
can inline virtual/polymorphic calls in tight loops.

This also isn't a "performance" test in any real sense. It's a test of startup
time. Where, yes, JITs lose, but unless you're writing short lived interactive
command line tools, or something that runs on lambda, that shouldn't be a
concern. For "normal" serverside or desktop apps that run for more than, say,
30 seconds, the difference between 0s of startup and 0.2s of startup time is
literally in the noise.

~~~
ddevault
Startup time is the least interesting metric in this article. The more
interesting metric is the number of syscalls. This isn't a measure of
performance, it's a measure of complexity and busywork. Complexity tends to
indirectly affect performance, but that's not the point of the article.

~~~
joshuamorton
Complexity of _what_?

The resulting generated binary? Well no, a python binary is smaller than the c
binary. The toolchain? Well gcc is pretty complex and that's unaccounted for.
The build process? Again, no.

The closest thing I can think of is the language runtime. But why do I care
about how complex the language runtime is? Often more complex language
runtimes make my life easier anyway, and they're all sitting atop the Intel
microcode magic box anyway.

There's a very specific definition of complexity you're using, and I'm still
not sure what it is. In my world, you usually add complexity to eke out extra
performance by breaking the less complex abstractions.

~~~
ddevault
The complexity of the _system_.

~~~
joshuamorton
I'm still confused. Why is runtime compilation a component of the "system",
but aot compilation is not? Why are python interpreters a component of the
system, but microcode interpreters and aot compilation not?

If you haven't given a clear definition of what "the system" is, I can't
really use your evaluation to influence my decision making.

------
emmanueloga_
I feel like printing "hello world" with, say, Java is like picking up your
supermarket groceries in a Scania truck. All that extra machinery is there
for when you need to tackle harder problems, but it can represent a lot of
overhead for super simple ones.

------
skybrian
It's an interesting table, but what you really want to know is how much of the
standard library gets linked in as soon as you do anything interesting. For
example, if using one printf links in a lot and you almost always use at least
one printf (or use a library that does), then the size you get when you
entirely avoid printf doesn't matter, because that's not a practical thing to
avoid. But you might want to know the minimum size _with_ printf.

Figuring out where the binary size cliffs are (what causes size to grow a
lot) and which cliffs are practical to avoid might be useful.
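
For instance, assuming two hypothetical files, empty.c (a bare main) and
hello.c (the same plus a single printf), the printf cliff could be probed
like this:

    
    
      $ gcc -static -O2 -o empty empty.c
      $ gcc -static -O2 -o hello hello.c
      $ size empty hello        # how much .text/.data one printf drags in
    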

------
tyingq
Perl seems to lead the pack for interpreted languages by a wide margin in
this microbenchmark. I wonder if that's just for the narrow case of print() /
hello-world.

~~~
kungtotte
I think it holds true for any kind of string manipulation. It's kind of why
Perl exists in the first place: to take one bunch of strings and turn them
into another, probably printing them out somehow.

It stands to reason that Perl's default use case is well optimized.

------
swiftcoder
Rust drops to 16KB (on a Mac, I suspect nearer 8KB on Linux) if you set
panic=abort, turn on LTO, and drop the std library:

    
    
      #![feature(start, lang_items)]
      #![no_std]
      #![no_main]
    
      #[link(name = "c")]
      extern "C" {
          pub fn puts(s: *const u8);
      }
      
      #[no_mangle]
      pub extern "C" fn main(_argc: i32, _argv: *const *const u8) -> i32 {
          unsafe {
              // NUL-terminated for C's puts, which appends its own newline
              puts(b"hello world\0" as *const u8);
          };
          0
      }
      
      #[panic_handler]
      fn panic(_info: &core::panic::PanicInfo) -> ! {
          loop {}
      }
      
      #[lang = "eh_personality"]
      extern "C" fn eh_personality() {}
    
    

2x assembly isn't bad, but I'm willing to bet we can go smaller.

~~~
steveklabnik
[https://github.com/tormol/tiny-rust-
executable](https://github.com/tormol/tiny-rust-executable)

(See also
[http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...](http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-
static-linux-binary-in-rust.html) which was the original implementation, this
shaved a few more bytes off)

------
abjKT26nO8
_> The following assembly program is the ideal Linux x86_64 program for this
purpose. A perfect compiler would emit this hello world program for any
language._

I.e., you want your compiler to emit code that isn't portable across
Unix-like systems (and their future versions) to any degree? AFAIK Linux is
the only Unix-like system that guarantees ABI stability.

I'm curious what Zig does in this case that got it such "good" results. Does
it forgo portability?

~~~
saagarjha
Linux binaries should take advantage of the stability of the platform they’re
on, surely?

------
based2
[http://www.rosettacode.org/wiki/Hello_world/Text](http://www.rosettacode.org/wiki/Hello_world/Text)

------
alberth
I’m curious to know how Nim performed, given that it compiles to C.

~~~
zamadatix
Someone commented a test shortly after you asked
[https://news.ycombinator.com/item?id=21957476](https://news.ycombinator.com/item?id=21957476)

~~~
dom96
I’m actually curious why Nim didn’t make the list. Crystal is there, as are a
lot of other emerging languages. Is the author not familiar with it?

~~~
kick
Drew's familiar with Nim; he's talked with you about it before on HN. Crystal
was volunteered by someone working on Crystal, Julia was volunteered by a dev
who works in Julia, etc. Haskell was also presented, I think, but Haskell was
a mess.

------
Arrezz
What is the history of the Hello World example program? I'm not sure why it's
become the de facto example of syntax for every language. Maybe there is
something else we could use as an example that would be more indicative of
syntax, but perhaps I'm overthinking it.

~~~
imglorp
Brian Kernighan, 1972, A Tutorial Introduction to the Language B.

And yeah, I think it's an okay intro to syntax because at least it shows you
the minimum boilerplate to get some output.

[https://en.wikipedia.org/wiki/%22Hello,_World!%22_program](https://en.wikipedia.org/wiki/%22Hello,_World!%22_program)

------
ianopolous
I'm curious what method was used to compile the Java version. I'm assuming it
was jlink? If you use GraalVM's native-image, then last time I checked the
output was 6 MiB with a runtime of ~1 ms. And it may be much better now with
Java 11.

------
NilsIRL
It would be interesting to know why the number of syscalls for C is so high.

~~~
BearOso
I’m wondering why his dynamic glibc C executable is so big.

~~~
kungtotte
He's counting glibc too, not just the binary. This is true for all of the
examples, the size is the binary itself plus whatever system/libraries you'd
need to actually run the binary.

------
marcosscriven
Curious what all the syscalls Rust is making are, and why?

~~~
steveklabnik
One reason is that println! will lock stdout for you. I'm not sure what
percentage that makes up.

If you don't want that behavior, you can control this all yourself with write!
and friends.
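
For instance, a sketch of that approach (lock stdout once, then write
directly):

    
    
      use std::io::Write;
    
      fn main() {
          let stdout = std::io::stdout();
          let mut lock = stdout.lock();            // take the lock once up front
          write!(lock, "hello world\n").unwrap();  // no per-call locking, unlike println!
      }
    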

EDIT: deeper analysis on Reddit:
[https://www.reddit.com/r/programming/comments/ejxwlu/hello_w...](https://www.reddit.com/r/programming/comments/ejxwlu/hello_world/fd3zr9p/)

------
zbentley
> Passing /dev/urandom into perl is equally likely to print “hello world”

I chuckled.

------
tylermenezes
I don't know much about Julia. Why is the binary over a third of a GiB?

~~~
ChrisRackauckas
Julia by default loads a system image built for scientific computing, so
linear algebra support, a package manager, distributed parallel computing,
etc. are all part of it. You can of course build a system image without all
of that, but given Julia's audience it's a good default choice.

------
z3t4
What is the command used to get total and unique syscalls!? strace wc !?
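
Presumably something along those lines; a sketch expanding that guess (not
confirmed as the author's method):

    
    
      $ strace -f ./hello 2>&1 | wc -l                            # total syscalls, roughly
      $ strace -f ./hello 2>&1 | cut -d'(' -f1 | sort -u | wc -l  # unique syscalls, roughly
    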

------
pyb
test.c compiles to 6 kilobytes on my Linux Mint machine... how did he get it
to compile to more than 2 megabytes?

~~~
kick
Run ldd on it; the "true" size will probably paint a different picture.

------
fulafel
Anyone want to do Clojure + GraalVM?

------
gpsx
And then, someone went and invented the graphical user interface. So many
precious machine cycles wasted rendering graphics for people who are too lazy
to use the command line like they are supposed to. Shhheez.

