
“What next?” - yomritoyj
http://graydon2.dreamwidth.org/253769.html
======
fulafel
Again my pet language/compiler-technology issue goes unmentioned: data
layout optimizations.

Control flow and computation optimizations have enabled use of higher level
abstractions with little or no performance penalty, but at the same time it's
almost unheard of to automatically perform (or even facilitate) the data
structure transformations that are daily bread and butter for programmers
doing performance work. Things like AoS->SoA conversion, compressed object
references, shrinking fields based on range analysis,
flattening/denormalizing data that is used together, converting cold struct
members to indirect lookups, compiling different versions of the code for
different call sites based on input data, etc.

It's baffling considering that everyone agrees memory access and cache
footprint are the current primary perf bottlenecks, to the point that experts
recommend treating on-die computation as free and counting only memory
accesses in first-order performance approximations.

~~~
jnordwick
I've been thinking about this for the last few years. What would an APL-like
language look like with structured data? Is that possible? Could you make a
language where you specify if a value is SoA or AoS? Is it possible to
automatically convert an AoS-based algorithm to SoA?

It really changes how you do basic things like sorting. In the standard AoS
approach in C-like languages you swap entire structures around the array. In
an SoA approach in APL-like languages you generate a list of indices that
would put the data in sorted order, then apply it to each column. A number of
times I've written code to do this in C++ for high-performance systems, and it
works great, but it's definitely a different way of thinking about things.

~~~
kd0amg
_Is it possible to automatically convert an AoS-based algorithm to SoA?_

Does this require anything more than transposing some struct member accesses?
IIRC, Futhark already does that kind of conversion.

~~~
jnordwick
You could do a fairly straightforward conversion but it would be terribly slow
for many things.

Take the sorting example I gave. In a column-oriented program you would
generate a list of indices and apply them to each column. That has very good
cache locality, as you work on each column independently. Also, you tend to
manipulate that index list instead of the data, and only after everything is
done, filtered, and whatever do you apply it to the data.

If you did things the AoS way you would be bouncing around in memory going
from column to column.

Dealing with SoA and column-oriented data is more than just a storage
decision.

~~~
kd0amg
_You could do a fairly straightforward conversion but it would be terribly
slow for many things._

Is there a non-straightforward conversion that isn't? It seems to me like any
AoS->SoA transformation is going to have pretty much the same effect on
locality.

~~~
jnordwick
> It seems to me like any AoS->SoA transformation is going to have pretty
> much the same effect on locality

Not at all. In an SoA system you want to operate on each attribute
individually, but in an AoS system you operate across structs. Besides
improved cache locality, it keeps loops small so they run out of the µop/loop
stream cache, among other things like better packing of data, etc.

If you haven't worked on a system like this, it can sometimes be difficult to
see how different things can be. It's literally APL vs C.

------
z1mm32m4n
Graydon's very first answer to "what's next" is "ML modules," a language
feature probably few people have experienced first hand. We're talking about
ML-style modules here, which are quite precisely defined alongside a language
(as opposed to a "module" as more commonly exists in a language, which is just
a heap of somewhat related identifiers). ML modules can be found in the
mainstream ML family languages (Standard ML, OCaml) as well as some lesser
known languages (1ML, Manticore, RAML, and many more).

It's really hard to do justice explaining how amazing modules are. They
capture the essence of abstraction incredibly well, giving you plenty of
expressive power (alongside an equally powerful type system). Importantly,
they compose; you can write functions from modules to modules!

(This is even more impressive than you think: modules have runtime (dynamic)
AND compile time (static) components. You've certainly written functions on
runtime values before, and you may have even written functions on static types
before. But have you written one function that operates on both a static and a
dynamic thing at the same time? And what kind of power does this give you?
Basically, creating abstractions is effortless.)

To learn more, I recommend you read Danny Gratzer's "A Crash Course on ML
Modules"[1]. It's a good jumping off point. From there, try your hand at
learning SML or OCaml and tinker. ML modules are great!

[1]:
[https://jozefg.bitbucket.io/posts/2015-01-08-modules.html](https://jozefg.bitbucket.io/posts/2015-01-08-modules.html)

~~~
cmrx64
1ML just shows that advanced (first-class) ML modules are really just System
F_omega. That's just barely more than Haskell! More languages need to expose
it nicely. But we already have most if not all the technology we need to make
them _efficient_ and _worthwhile_, vs. just possible. This isn't a research
problem. This is a design/HCI problem.

~~~
catnaroek
ML modules are great, but I'm not convinced that making them first-class (1ML)
will be equally great. ML hits a sweet spot between automation (full type
inference) and expressiveness that is only possible because the type language
is first-order. This is IMO the point that most extensions on top of Hindley-
Milner completely miss.

So, if anything, what I'd like to see is a dependently typed language whose
type language is a first-order, computation-free (no non-constructor function
application) subset of the value language.

~~~
ratmice
The 1ML papers discuss this; real ML implementations are already incomplete
with regard to Hindley-Milner. A quote:

"We show how Damas/Milner-style type inference can be integrated into such a
language; it is incomplete, but only in ways that are already present in
existing ML implementations."

------
Animats
One big problem we're now backing into is having incompatible paradigms in the
same language. Pure callback, like Javascript, is fine. Pure threading with
locks is fine. But having async/await and blocking locks in the same program
gets painful fast and leads to deadlocks. Especially if both systems don't
understand each other's locking. (Go tries to get this right, with unified
locking; Python doesn't.)

The same is true of functional programming. Pure functional is fine. Pure
imperative is fine. Both in the same language get complicated. (Rust may have
overdone it here.)

More elaborate type systems may not be helpful. We've been there in other
contexts, with SOAP-type RPC and XML schemas, superseded by the more casual
JSON.

Mechanisms for attaching software unit A to software unit B usually involve
one being the master defining the interface and the other being the slave
written to the interface. If A calls B and A defines the interface, A is a
"framework". If B defines the interface, B is a "library" or "API". We don't
know how to do this symmetrically, other than by much manually written glue
code.

Doing user-defined work at compile time is still not going well. Generics and
templates keep growing in complexity. Making templates Turing-complete didn't
help.

~~~
mpweiher
> incompatible paradigms

See _Architectural Mismatch or, Why it's hard to build systems out of
existing parts_[1]

Yes, we are not good at it, but we need to do it. Very often, the dominant
paradigm is not appropriate for the application at hand, and often no single
paradigm is appropriate for the entire application.

For example, UI programming does not really fit call/return well at all[2].

> attaching software unit A to software unit B .. master/slave

Case in point: this is largely due to the call/return architectural style
being so incredibly dominant that we don't even see it as a distinct style,
with alternatives. I am calling it 'The Gentle Tyranny of Call/Return'.

[1]
[http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...](http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_abstracts/archmismatch-
icse17.html)

[2]
[http://dl.ifip.org/db/conf/ehci/ehci2007/Chatty07.pdf](http://dl.ifip.org/db/conf/ehci/ehci2007/Chatty07.pdf)

~~~
Animats
_See Architectural Mismatch or, Why it's hard to build systems out of
existing parts_

If you want to study that, look at ROS, the Robot Operating System. ROS is a
piece of middleware for interprocess communication on Linux, plus a huge
collection of existing robotics, image processing, and machine learning tools
which have been hammered into using that middleware. The dents show. There's
so much dependency and version pinning that just installing it without
breaking an Ubuntu distribution is tough. It does sort of work, and it's used
by many academic projects.

In a more general sense, we don't have a good handle on "big objects".
Examples of "big objects" are a spreadsheet embedded in a word processor
document or an SSL/TLS system. Big objects have things of their own to do and
may have internal threads of their own. We don't even have a good name for
these. Microsoft has Object Linking and Embedding and the Component Object Model,
which date from the early 1990s and live on in .NET and the newer Windows
Runtime. These are usually implemented, somewhat painfully, through the DLL
mechanism, shared memory, and through inter-process communication. All this is
somewhat alien to the Unix/Linux world, which never really had things like
that except as emulations of what Microsoft did.

"Big object" concepts barely exist at the language level. Maybe they should.

------
borplk
I'd say the elephant in the room is graduating beyond plaintext (projectional
editor, model-based editor).

If you think about it, so many of our problems are a direct result of
representing software as a bunch of plaintext files and folders.

Our "fancy" editors and "intellisense" only go so far.

Language evolution is slowed down because syntax is fragile and parsing is
hard.

A "software as data model" approach takes a lot of that away.

You can cut down so much boilerplate and noise because you can have certain
behaviours and attributes of the software be hidden from immediate view or
condensed down into a colour or an icon.

Plaintext forces you to have a visually distracting element in front of you
for every little thing. So as a result you end up with obscure characters and
generally noisy code.

If your software is always in a rich data model format your editor can show
you different views of it depending on the context.

So how you view your software when you are in "debug mode" could be wildly
different from how you view it in "documentation mode" or "development mode".

You can also pull things from arbitrary places into a single view at will.

Thinking of software as a "bunch of files stored in folders" comes with a lot
of baggage and a lot of assumptions. It inherently biases how you organise
things. And it forces you to do things that are not always in your interest.
For example you may be "forced" to break things into smaller pieces more than
you would like because things get visually too distracting or the file gets
too big.

All of that stuff is an arbitrary side effect of this ancient view of
software, and it will immediately go away as soon as you treat AND ALWAYS KEEP
your software as a rich data model.

Hell, all of the problems with parsing text and ambiguity in syntax and so on
will also disappear.

~~~
beagle3
This claim is often repeated, but I haven't seen it substantiated even once.
It's possible that no one has yet come up with the right answer that's
"obviously there". But it is also possible that this claim is not true, and I
tend to believe the latter more as time passes.

Every attempt that I've seen, e.g. Lamdu, Subtext, any "visual app builder",
all fail miserably at delivering ANY benefit except for extremely simple
programs -- while at the same time, taking away most of the useful tools we
already have like "grep", "diff", etc. Sure, they can be re-implemented in the
"rich data model", perhaps even better than their textual ancestors - but the
thing is, that they HAVE to be re-implemented, independently for each such
"rich data model", or you can't have their functionality at all -- whereas a
1972 "diff" implementation is still useful for 2017 "Pony", a language with a
textual representation.

Regarding your example, the "breaking things into smaller pieces" was solved
long ago by folding editors (I used one on an IBM mainframe in 1990, I suspect
Emacs already had it at the same time, it did for sure in 1996).

The problems with "parsing and ambiguity" are self-inflicted, independent of
whether the representation is textual. Lisp has no ambiguity; Q (the K syntax
sugar) has no ambiguity. Both languages eschew operator precedence, by the
way, because THAT is the real issue that underlies modern syntax ambiguities.

I've been waiting for that amazing "software as a data model" approach to show
a benefit for almost 30 years now (There's been an attempt nearly every year I
looked). Where it has (e.g. Lisp, Forth), it's completely orthogonal to the
textual representation.

~~~
iso-8859-1
One advantage I dream of with Lamdu is snapshotting application state and
having values annotated in the boxes next to their definition. You could edit
the code in the snapshot and see how it propagates through pure parts of the
program. If it works, you'd apply the change to the running system.

How do you do this with a text-driven program? Typically there are so many
files and unspecified build systems that you can't view your program as a
proper tree as in Lamdu. You can't even treat compilation as a pure function.
You can't take a snapshot of an arbitrary program because effects are not
isolated. You might not even be able to set breakpoints without breaking the
program. If it's machine code you might not even have debug symbols. It's such
a mess, and it's different on every platform.

So you claim there is no benefit to be had. I claim the existing solutions are
already failing. If one solution gets enough of these things right, there is
no reason why it wouldn't succeed just because it's text-driven. (But it is
also possible to do it correctly with text.)

If the program code is always to be consistent, it _cannot_ be text, because
text allows you to input invalid programs. So how is text not a leaky
abstraction?

~~~
beagle3
> How do you do this with a text driven program?

If you haven't watched
[https://vimeo.com/36579366](https://vimeo.com/36579366) , I highly recommend
it. Chris Granger, after watching this, created [http://www.chris-
granger.com/2012/02/26/connecting-to-your-c...](http://www.chris-
granger.com/2012/02/26/connecting-to-your-creation/) which you can download,
and applies to ClojureScript, which is (tada!) text based.

Also, I had a chance to use a time-traveling virtual machine (which works
independently of how your code was produced) in 2007, and while I haven't had
a chance to use them, I know that source-aware time-traveling debuggers have
made huge strides recently.

> So you claim there is no benefit to be had. I claim the existing solutions
> are already failing.

These claims are not contradictory. I agree that existing solutions are
failing in various ways, but I still disagree that the "code as rich data"
would provide benefit.

> If one solution gets enough of these things right, there is no reason why it
> wouldn't succeed just because text-driven. (But it is also possible to do it
> correctly with text)

So, if it's possible to do correctly when text-driven, why is text-driven a
problem?

> If the program code is always to be consistent, it cannot be text, because
> text allows you to input invalid programs. So how is text not a leaky
> abstraction?

All programming abstractions are leaky in one way or another, and text is no
different. In my opinion, the leak that it is possible, during design time, to
have an invalid program (which will be rejected as such prior to execution) is
not a real problem. The requirement that "program code always be consistent"
is onerous, and, in my opinion, harmful.

When I write code, I often start with sketches that cannot work and cannot
compile while I think things through, and then mold them into working code.
Think about it as converting comments to code. If an editor didn't let me do
that (and likely wouldn't provide any help, because I'm writing comment prose
and not "rich data" code...), I'd open a text editor to write my plans.

I will restate my belief in a different way: any benefit from "code as data"
that cannot be applied to a textual representation only ever manifests in
extremely simple cases. I offer as support for this belief the history of
visual and rich-data tools over the last 30 years, none of which manages to
scale as well as text-based processes, most of them scaling significantly
worse (which is often explained away by critical mass, but there are enough
successful niche tools that I think this explanation is unacceptable).

------
gavanwoolery
I like to read about various problems in language design; as someone who is
relatively naive to its deeper intricacies, it really helps broaden my view.
That said, I have seen a trend towards adding various bells and whistles to
languages without any sort of consideration as to whether it actually, in a
measurable way, makes the language better.

The downside to adding an additional feature is that you are much more likely
to introduce a leaky abstraction (even with things as minor as syntactic sugar).
Your language has more "gotchas", a steeper learning curve, and a higher
chance of getting things wrong or not understanding what is going on under the
hood.

For this reason, I have always appreciated relatively simple homoiconic
languages that are close-to-the-metal. That said, the universe of tools and
build systems around these languages has been a growing pile of cruft and
garbage for quite some time, for understandable reasons.

I envision the sweet spot lies at a super-simple system language with a
tightly-knit and extensible metaprogramming layer on top of it, and a
consistent method of accessing common hardware and I/O. Instant recompilation
("scripting") seamlessly tied to highly optimized compilation would be ideal
while I am making a wishlist :)

~~~
ColanR
Ever heard of K?

~~~
kbenson
Is there a free version? I was interested in learning it a few years ago but
couldn't find a version that was free for both commercial and non-commercial
use, which tempered my enthusiasm.

~~~
grayrest
To my knowledge, there isn't a full OSS implementation of K/Q. I do know of a
K5 interpreter in JS [1] but otherwise I've only been able to find the free
for personal use 32-bit Q distribution.

[1] [https://github.com/JohnEarnest/ok](https://github.com/JohnEarnest/ok)

~~~
pja
Kona implements K3 according to its github.

------
carussell
All this and handling overflow still doesn't make the list. Had it been the
case that easy considerations for overflow were baked into C back then, we
probably wouldn't be dealing with hardware where handling overflow is even
more difficult than it would have been on the PDP-11. (On the PDP-11, overflow
would have trapped.) At the very least, it would be the norm for compilers to
emulate it whether there was efficient machine-level support or not. However,
that didn't happen, and because of that, even Rust finds it acceptable to punt
on overflow for performance reasons.

~~~
Animats
_On the PDP-11, overflow would have trapped._

On the DEC VAX, overflow could be set to trap, based on a bit mask at the
beginning of each function, but that was not the case on the PDP-11. Nobody
used that feature. I once modified the C compiler for the VAX to make integer
overflow trap, and rebuilt standard utilities. Most of them trapped on integer
overflow.

If you want arithmetic to wrap, you should have to write something like

    n := (n + 1) mod (2^32);

Let the optimizer figure out there's a cheap way to do that.

------
mcguire
[Aside: Why do I have the Whiley
([http://whiley.org/about/overview/](http://whiley.org/about/overview/)) link
marked seen?]

I was mildly curious why Graydon didn't mention my current, mildly passionate
affair, Pony ([https://www.ponylang.org/](https://www.ponylang.org/)), and its
use of capabilities (and actors, and per-actor garbage collection, etc.).
Then, I saw,

" _I had some extended notes here about "less-mainstream paradigms" and/or
"things I wouldn't even recommend pursuing", but on reflection, I think it's
kinda a bummer to draw too much attention to them. So I'll just leave it at a
short list: actors, software transactional memory, lazy evaluation,
backtracking, memoizing, "graphical" and/or two-dimensional languages, and
user-extensible syntax._"

Which is mildly upsetting, given that Graydon is one of my spirit animals for
programming languages.

On the other hand, his bit on ESC/dependent typing/verification tech. covers
all my bases: " _If you want to play in this space, you ought to study at
least Sage, Stardust, Whiley, Frama-C, SPARK-2014, Dafny, F∗, ATS, Xanadu,
Idris, Zombie-Trellys, Dependent Haskell, and Liquid Haskell._ "

So I'm mostly as happy as a pig in a blanket. (Specifically, take a look at
Dafny
([https://github.com/Microsoft/dafny](https://github.com/Microsoft/dafny))
(probably the poster child for the verification approach) and Idris
([https://www.idris-lang.org/](https://www.idris-lang.org/)) (voted most
likely to be generally usable of the dependently typed languages).)

~~~
DonbunEf7
Indeed, I would say that capability-safety is the next plateau after memory-
safety, but most languages are only barely glimpsing the capability horizon.

~~~
jerf
This is personally one of my major pet peeves, because I like to write secure
code, and I think you very rapidly end up half-reconstructing capabilities
work if you do, and it is not hard to notice that you basically get _no_ help
whatsoever from languages. It is still clearly a nearly-invisible problem to
the vast majority of developers. You need something almost Haskell-class
simply to assert "Please, at least provide me some evidence that you ran
_some_ sort of permission check" before accessing a resource.

~~~
zerker2000
For "you ran _some_ sort of permission check" I'm a great fan of the React.js
`dangerouslySetInnerHTML` interface, which accepts a corresponding `{__html:
...}` object that has at least nominally been properly vetted.

------
mcguire
" _Writing this makes me think it deserves a footnote / warning: if while
reading these remarks, you feel that modules -- or anything else I'm going to
mention here -- are a "simple thing" that's easy to get right, with obvious
right answers, I'm going to suggest you're likely suffering some mixture of
Stockholm syndrome induced by your current favourite language, Engineer
syndrome, and/or Dunning–Kruger effect. Literally thousands of extremely
skilled people have spent their lives banging their heads against these
problems, and every shipping system has Serious Issues they simply don't deal
with right._"

Amen!

------
statictype
So Graydon works at Apple on Swift?

Wasn't he the original designer of Rust and employed at Mozilla?

Surprised that this move completely went under my radar

~~~
steveklabnik
That's how he likes it, to be honest. He doesn't want it to be some statement
about Rust or Swift; he left Rust long ago, did some other stuff for a while,
and there are only so many "work on a programming language" jobs out there.

In general the Rust and Swift teams are very friendly with one another, and
have shared several devs.

~~~
statictype
Sure. He was very impressively neutral and rational about Monotone as well.

------
rtpg
The blurring of types and values as part of the static checking very much
speaks to me.

I've been using Typescript a lot recently with union types, guards, and other
tools. It's clear to me that the type system is very complex and powerful! But
sometimes I would like to make assertions that are hard to express in the
limited syntax of types. Haskell has similar issues when trying to do type-
level programming.

Having ways to generate types dynamically and hook into typechecking to check
properties more deeply would be super useful for a lot of web tools like ORMs.

~~~
mcguire
Check Idris ([https://www.idris-lang.org/](https://www.idris-lang.org/)) and
if you have a chance, work through the Idris book, _Type Driven Development
with Idris_ ([https://www.manning.com/books/type-driven-development-
with-i...](https://www.manning.com/books/type-driven-development-with-idris)).

------
bjz_
I would love to see some advancements in distributed, statically typed
languages that can run across a cluster and that would support type-safe,
rolling deployments. One would have to ensure that state could be migrated
safely, and that messaging can still happen between the nodes of different
versions. Similar to thinking about this 'temporal' dimension of code, it
would be cool to see us push versioning and library upgrades further, perhaps
supporting automatic migrations.

~~~
doublec
Have you seen Alice ML?

It's a Standard ML variant that includes serialization of types and support
for distributed programming.

[1] [http://www.ps.uni-
saarland.de/alice/manual/tour.html#distrib...](http://www.ps.uni-
saarland.de/alice/manual/tour.html#distribution)

~~~
bjz_
How does it handle heterogeneous clusters that might have different versions
of code running? Or is it like Cloud Haskell where you have to bring
everything down when deploying?

------
dom96
Interesting to see the mention of effect systems. However, I am disappointed
that the Nim programming language wasn't mentioned. Perhaps Eff and Koka have
effect systems that are far more extensive, but as a language that doesn't
make effect systems its primary feature, I think Nim stands out.

Here is some more info about Nim's effect system: [https://nim-
lang.org/docs/manual.html#effect-system](https://nim-
lang.org/docs/manual.html#effect-system)

------
hderms
Fantastic article. This is the kind of stuff I go to Hacker News to read. Had
never even heard of half of these conceptual leaps.

------
simonebrunozzi
I would have preferred a more informative HN title, instead of a semi-
clickbaity "What next?", e.g.

"The next big step for compiled languages?"

~~~
tedunangst
Not everyone writes exclusively for the HN audience.

~~~
ekiru
I think this is more of a complaint about HN's title policies.

------
msangi
It's interesting that he doesn't want to draw too much attention to actors
while they are prominent in Chris Lattner's manifesto for Swift [1]

[1]
[https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...](https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782)

~~~
mcguire
Actors are a bit of an off-the-wall paradigm
([https://patterns.ponylang.org/async/actorpromise.html](https://patterns.ponylang.org/async/actorpromise.html)),
and I'm (as an old network protocol guy) not sure I'm happy with the attempts
to make message passing look like sequential programming (like async/await).

I kinda see where Graydon is coming from. I have this broken half-assed Pony
parser combinator _thing_ staring at me right now.

------
ehnto
I know I am basically dangling meat into the lion's den with this question:
how has PHP7 done in regards to the Modules section, or the modularity he
speaks of?

I am interested in genuine and objective replies, of course.

(Yes, your joke is probably very funny, and I am sure it's a novel and
exciting quip about the state of affairs in 2006 when WordPress was the
flagship product.)

~~~
thaumasiotes
> the state of affairs in 2006 when wordpress was the flagship product

Does PHP have a different flagship product now?

~~~
TazeTSchnitzel
It doesn't. WordPress is still the elephant (heh, elePHPant…) in the room.

PHP has other significant projects built on it of course, but WordPress is a
behemoth nothing can hope to beat.

------
ilaksh
I think at some point we will get to projectional editors being mainstream
for programming, and eventually things that we normally consider user
activities will be recognized as programming when they involve Turing-complete
configurability. This will be an offshoot of more projectional editing.

I also think that eventually we may see a truly common semantic definitional
layer that programming languages and operating systems can be built on. It's
just like the types of metastructures used as the basis for many platforms
today, but with the idea of creating a truly über-platform.

Another futuristic idea I had would be a VR projectional programming system
where components would be plugged together and configured in 3D.

Another idea might be to find a way to take the flexibility of advanced neural
networks and make it a core feature of a programming language.

------
lazyant
What would be a good book / website to learn the concepts & nomenclature in
order to understand the advanced language discussions in HN like this one?

~~~
hyperpape
I can't attest to either, having not read them, but you might try Bob Harper's
"Practical Foundations for Programming Languages" or Benjamin Pierce's "Types
and Programming Languages".

~~~
EvilTerran
I can vouch for TaPL - I can see it on my bookshelf from here. It's a great
overview of the foundational concepts in PLT; Pierce's style is remarkable, he
writes the most readable academic text I've ever seen.

(I also own & admire his _Basic Category Theory for Computer Scientists_; I'm
yet to brave his _Advanced Topics in TaPL_ - I'm sure it's very good, but the
contents list intimidates me.)

------
leeoniya
It's interesting that Rust isn't mentioned once in his post. I wonder if he's
disheartened with the direction his baby went.

~~~
tomjakubowski
People often speculate about this, which boggles me since graydon remarks
positively on the current state of Rust all the time.

[http://graydon2.dreamwidth.org/247406.html](http://graydon2.dreamwidth.org/247406.html)

[https://twitter.com/graydon_pub/status/886212237987368960](https://twitter.com/graydon_pub/status/886212237987368960)

He even addresses the changes to the runtime:

[https://twitter.com/graydon_pub/status/705226109856526336](https://twitter.com/graydon_pub/status/705226109856526336)

~~~
adwhit
Just to add (written just after 1.0):
[http://graydon2.dreamwidth.org/214016.html](http://graydon2.dreamwidth.org/214016.html)

I think the above post indicates that he is extremely proud of how Rust turned
out.

------
jancsika
I'm surprised build time wasn't on the list.

Curious and can't find anything: what's the most complex golang program out
there, and how long does it take to compile?

~~~
pebers
jujud is commonly used as a benchmark for the Go compiler/linker, as it's
probably the biggest open-source Go project. I can't find the actual compile
times right now though, only relative figures.

------
ehudla
LtU thread: [http://lambda-the-ultimate.org/node/5466](http://lambda-the-
ultimate.org/node/5466)

------
touisteur
About units and dimensionality, AdaCore tried something in Ada with GNAT:
[http://www.adacore.com/adaanswers/gems/gem-136-how-tall-
is-a...](http://www.adacore.com/adaanswers/gems/gem-136-how-tall-is-a-
kilogram/) and [http://www.christ-usch-
grein.homepage.t-online.de/Ada/Dimens...](http://www.christ-usch-
grein.homepage.t-online.de/Ada/Dimension/Physical_units_with_GNAT_GPL_2013-AUJ35.1.pdf)

------
AstralStorm
Extra credit for whoever implements logic proofs on concurrent applications.

~~~
pron
Languages designed for safety-critical realtime applications do just that.
Well, they don't use _proofs_ as proofs generally have a bad cost/benefit
ratio (even for sequential programs), but they do use temporal logics that
easily deal with concurrency, which they then verify mostly automatically.
Temporal logics were introduced to CS in the late '70s, have resulted in at
least two Turing awards, and have enjoyed some success in industry.

~~~
mcguire
Do you have links?

I'm aware of TLA+ and (Jay Misra's) Unity
([https://en.wikipedia.org/wiki/UNITY_(programming_language)](https://en.wikipedia.org/wiki/UNITY_\(programming_language\))),
but I haven't seen any other uses of temporal logics. (The lengths people will
go to in order to get away from box and diamond...)

~~~
pron
Take a look at Esterel[1], and its current incarnation, SCADE[2], which is
used for avionics.

[1]:
[https://en.wikipedia.org/wiki/Esterel](https://en.wikipedia.org/wiki/Esterel)

[2]: [http://www.esterel-technologies.com/products/scade-
suite/](http://www.esterel-technologies.com/products/scade-suite/)

------
platz
whats wrong with software transactional memory?

~~~
qznc
Nobody figured out how to implement it efficiently yet.

~~~
marcosdumay
Haskell's STM is not efficient?

~~~
qznc
Haskell is not efficient with all these thunks and memory allocations.

Maybe the way Haskell does STM is relatively efficient. However, you cannot
port that to C/C++/Java/C#, because they do not provide the type system to
make it safe.

Maybe Rust can deliver. A quick search gives me rust-stm [0]. All the
disclaimers in the README look discouraging, though. For example, "Calls to
atomically should not be nested" -- but isn't that the whole point of
_composable_ STM?

Repeating transactions is a no-go, because it does not mix well with side
effects. Thus, we need a lock-everything-required scheme. You do not want to
expose this in the type system, because it explodes everywhere and gets really
tiresome. Thus you need a whole-world analysis to find out what is "required"
for a transaction. That breaks modular compilation and is inconvenient. I
don't see how this could work even at a very high level.

[0] [https://github.com/Marthog/rust-stm](https://github.com/Marthog/rust-stm)

~~~
platz
Yes, in a non-pure type system, you are "on your honor" to not cause side
effects. Clojure made STM a first-class language feature despite this
[https://clojure.org/reference/refs](https://clojure.org/reference/refs)

Second, the problems with impurity you listed are exactly solved by Haskell.
The fact that you cannot port that to language x/y/z shouldn't be counted
against Haskell.

We are talking about transactions here, but you're worried about thunks and
memory allocations, and comparing this to Java? It seems like you've simply
decided Haskell is categorically slow, regardless of the context we're talking
about.

------
rurban
No type system improvements to support concurrency safety?

~~~
iso-8859-1
I think dependent types are the next big step, and they also blur the line
between compilation and runtime. This will enable applications to do
domain-specific optimizations, since types become so much more expressive.
Purity allows for
automatic parallelization. What Edward Kmett does in C++ here:
[https://www.youtube.com/watch?v=KzqNQMpRbac](https://www.youtube.com/watch?v=KzqNQMpRbac)
could be the default for all computation. On a modern computer, data
parallelism is free, since SIMD instructions take just as long as any other
instruction.

------
baby
Can someone edit the title to something clearer? Thanks!

