
Show HN: A dependently-typed programming language with static memory management - u2zv1wx
https://github.com/u2zv1wx/neut
======
lukevp
You are almost 3500 commits into this project already, with no other
contributors? The dedication to this is incredible. It sounds super compelling
and I wish you luck!

I hope you get more recognition to encourage you to continue. These new
languages are so important in pushing forward our tooling and our
understanding of workable abstractions as an industry. I am only recently
getting into functional programming and it has already fundamentally changed a
lot of my perspective on OO, composition vs inheritance, immutability, pure
functions, etc.

Have you considered trying to make this a little more accessible (a bit of a
focus on the marketing side?) I would really like to digest the main benefits
of your language more easily. One example is that when skimming your readme,
the first thing my eye is drawn to is the bulleted section which talks about
other limited memory management solutions. But if I didn’t read the small text
before it that says “neut doesn’t use these” I do not understand the
compelling feature of the language to dive in further.

Have you followed Zig? Andrew Kelley is doing something similar (insofar
as he’s building a new language focused on memory management) and even though
I don’t use it, I see the value in this work and support him on Patreon.

I would be happy to help you with reviewing the copy on the readme from the
perspective of someone who is technical but not super knowledgeable in this
domain to help you summarize the key concepts and advantages up front. Reach
out to me with the email in my profile if you would like to discuss!

~~~
parentheses
We've entered an era where new languages almost never see adoption. The Go and
Rust stories are exceptions. This is the long thin tail of new language
adoption.

~~~
naasking
> We've entered an era where new languages are almost never used

I disagree. The same was said when Perl dominated, before Ruby and Python came
along, and of Pascal, C, and C++ before that. Nowadays Nim, Crystal, Rust, Go, F#,
D, Zig, JavaScript, Haskell, and more are all viable options for application
development.

We have more viable programming languages than ever.

~~~
jcelerier
I live in a moderately large urban area in France (~800k inhabitants) and for
your list ("Nim, Crystal, Rust, Go, F#, D, Zig, JavaScript, Haskell") I've
never seen any local job ad for any of those except JS and Go, and only saw a
bit of Haskell at the university.

From a quick glance over a few dozen pages of job ads, it's mostly Java & PHP,
with a bit of JS, Python and C# here and there, and some C/C++ in embedded. I
saw two Node.js ads, as well as a COBOL one and a Kotlin one.

So, yes, maybe they are viable. But used? They're blips on the radar next to
the big ones.

~~~
vorpalhex
In my city in the US:

+ Node.js is plentiful

+ Golang is up and coming

+ Some Elixir

+ Haskell is pretty rare, but it shows up as a secondary language

+ Java and PHP are plentiful, but these tend to be large, older corporate gigs

+ Rust is rare

+ No Crystal/Nim/F#/Zig/D that I've seen

Obviously anecdotal, and your case may differ.

------
akavi
This looks amazing, albeit waaay over my head.

The introduction says it "is made possible by translating the source language
into a dependent variant of Call-By-Push-Value". What makes such a
translation impossible in the existing languages you mention
(Haskell/OCaml/etc)? Are there restrictions on expressivity not present in
those languages/augmentations to their type system needed?

~~~
zozbot234
Isn't CBPV more of a way of accounting for both strict (call-by-value) and
lazy (call-by-name) evaluation in the same programming language? Not sure how
that would help wrt. static memory management.

~~~
throwaway17_17
Call-By-Push-Value does account for both CbV and CbN language semantics, but
the reason it can do so is that it bases the language on a rather particular
categorical semantics.

Based on the linked intro, it would seem that the language is leveraging the
‘computational’ types that are an intrinsic part of the CBPV semantics to
force the ‘thunking’ of the dependent types. Effectively, all of the types
become ‘functions’ through the CBPV lens, and those functions are linear by
construction (it is a categorical, as in category theory, feature of the
underlying semantics). Although not cited, it seems like the underlying type
theory takes notice and inspiration from Levy’s work on adjunction models for
CBPV.

I tried to find a way to make this comment that wasn’t too academic-sounding,
but I think I missed the mark.
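
To make the ‘thunking’ idea a bit more concrete, here is a rough, illustrative sketch in Rust (my own analogy, not Neut's actual machinery): CBPV separates values from computations, and a computation becomes a first-class value via `thunk` and is resumed via `force`. The once-only consumption below mirrors the linear discipline mentioned above.

```rust
// Illustrative only: a CBPV-style thunk as a suspended computation.
// Forcing consumes the thunk, so each thunk is used exactly once
// (a linear use, enforced here by Rust's move semantics).
struct Thunk<A>(Box<dyn FnOnce() -> A>);

// `thunk` suspends a computation into a first-class value.
fn thunk<A, F: FnOnce() -> A + 'static>(f: F) -> Thunk<A> {
    Thunk(Box::new(f))
}

// `force` resumes the suspended computation, consuming the value.
fn force<A>(t: Thunk<A>) -> A {
    (t.0)()
}

fn main() {
    let suspended = thunk(|| 2 + 2); // nothing runs yet
    assert_eq!(force(suspended), 4); // runs exactly once
}
```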

~~~
eointierney
You give excellent commentary

------
doersino
The introduction at the top of the Readme is great – it succinctly explains
what the project does, how it relates to existing languages, and why the
reader should care.

Way too many projects on GitHub and the like don't do this well (or at all).

------
scott_s
Have you considered submitting this work to any computer science conferences?
PLDI is the obvious first candidate.

~~~
u2zv1wx
Possibly, but I'm not sure. Currently my life is in a peculiar state (?), so I
think I first need to stabilize it somehow.

~~~
Syzygies
This is the most interesting language introduction I've seen in years. Work
like this gives life meaning. You have a larger purpose for stabilizing your
life; our hopes are with you.

------
NieDzejkob
If I understand correctly (and I'm not _sure_ I do), neut achieves its memory
management by not sharing data between structures, and instead copying it.
This works well when all data structures are immutable.

However, I feel like it would be more performant to just use reference
counting here. After all, incrementing a counter must be faster than a memcpy,
no? Since immutable values can't create cycles, no memory will be leaked.
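
For what it's worth, the reference-counting alternative described here can be sketched in Rust with `Rc` (an illustration of the idea, not how Neut is implemented): "copying" the handle only bumps a counter rather than memcpy-ing the data, and since the contents are immutable, no cycles can form.

```rust
use std::rc::Rc;

fn main() {
    // An immutable structure shared via reference counting:
    // duplicating the handle only increments an integer, no memcpy.
    let original: Rc<Vec<i32>> = Rc::new(vec![1, 2, 3]);
    let shared = Rc::clone(&original); // count: 1 -> 2

    assert_eq!(Rc::strong_count(&original), 2);
    assert_eq!(*shared, vec![1, 2, 3]); // same underlying data

    drop(shared); // count: 2 -> 1; memory is freed when it hits 0
    assert_eq!(Rc::strong_count(&original), 1);
}
```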

~~~
throwaway17_17
I haven’t done a deep dive into the implementation, but based on the theory
employed, particularly the linear nature of CBPV’s computational types, the
copying would most likely be elided in all cases except for when a programmer
writes a function which explicitly copies data to a new term.

~~~
u2zv1wx
I can't believe my good fortune to have a wonderful reader like you, by the
way.

------
runeks
The concept of linearity is referenced 21 times in various sections, but not
in the introduction. As a first-time reader, I would appreciate if the
introduction were to mention the role of linearity in this language before I
encounter it in a subsection.

~~~
u2zv1wx
Thank you for your kind feedback. I'll try to come up with a concise way to
mention it in the introduction.

------
shpongled
Very cool, I'll dig into this later. I've been meaning to gain a better
understanding of dependent typing. Been working on an SML clone with first
class modules (à la 1ML/F-ing modules), and I understand that the module
language is typically modeled with dependent types. It's been challenging to
try to enable modules-as-existentials without too much compiler-side hackery
going on.

------
logicchains
Since this seems to support inductive types, would it be correct to say that
it's based on the calculus of inductive constructions (CIC), not the plain
CoC? Deriving induction in CoC alone has been proven impossible, so basing a
dependently-typed language on pure CoC plus only features weaker than
inductive types is an open research problem (see e.g. Cedille and Formality).

~~~
perthmad
What do you mean? CoC definitely supports impredicative encodings, and as far
as expressivity goes, this allows one to implement a lot of programs. Proving
them correct is another matter, but that's not what you implied. Also, CIC is
notably not Turing-complete.

~~~
logicchains
I was referring to
[https://link.springer.com/chapter/10.1007/3-540-45413-6_16](https://link.springer.com/chapter/10.1007/3-540-45413-6_16);
induction is not derivable in pure CoC. I suppose that, technically, you could
base a dependently-typed language on it if you didn't mind not supporting
induction, but I can't imagine many people wanting to use it, as it would be
quite limited compared to one supporting inductive proofs. Or at least, when
people think of a "dependently typed programming language", they're usually
thinking of something at least as expressive as CIC.

------
namelosw
Wow, this is mind-blowing. Great respect to the author.

May I ask what preliminary knowledge or directions I should look into, in
order to gain a decent understanding of the codebase?

------
aey
Super cool! Any idea how big the runtime is? Is it easy to run a libc-free
version? I honestly feel like the embedded/firmware slice of the stack
desperately needs a next gen language.

~~~
u2zv1wx
What you need are the two functions `malloc` and `free` that have the
following signatures:

      declare i8* @malloc(i64)
      declare void @free(i8*)

So if you have implementations of them written in LLVM IR, I think that's
enough.

Disclaimer: I'm not very good at low-level concepts. Correct me if I'm wrong
here.
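
If it helps, a libc-free environment could satisfy those two symbols with something as small as a bump allocator. Below is a rough Rust sketch; the names `my_malloc`/`my_free` are placeholders to avoid clashing with the host libc here, but in a real freestanding build you would export them as `malloc`/`free` so the generated LLVM IR links against them.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

const HEAP_SIZE: usize = 64 * 1024;

// A fixed arena plus a bump offset; `free` reclaims nothing, so
// this only suits short-lived programs, but it provides the symbols.
struct Arena(UnsafeCell<[u8; HEAP_SIZE]>);
unsafe impl Sync for Arena {}

static HEAP: Arena = Arena(UnsafeCell::new([0; HEAP_SIZE]));
static OFFSET: AtomicUsize = AtomicUsize::new(0);

pub extern "C" fn my_malloc(size: usize) -> *mut u8 {
    let size = (size + 7) & !7; // round up to 8-byte alignment
    let start = OFFSET.fetch_add(size, Ordering::SeqCst);
    if start + size > HEAP_SIZE {
        return std::ptr::null_mut(); // out of memory
    }
    unsafe { (HEAP.0.get() as *mut u8).add(start) }
}

pub extern "C" fn my_free(_ptr: *mut u8) {
    // A bump allocator never frees; a real implementation would
    // track and reclaim allocations here.
}

fn main() {
    let a = my_malloc(16);
    let b = my_malloc(16);
    assert!(!a.is_null() && !b.is_null());
    assert_ne!(a, b); // distinct allocations
}
```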

~~~
aey
Is there a big runtime that’s linked to support all the language features?
Like what is the fully statically linked size of

    int main(void) { return 0; }

If it’s just providing free/malloc symbols, that’s wonderful!

------
milkey_mouse
This reminds me a lot of Carp:
[https://github.com/carp-lang/Carp](https://github.com/carp-lang/Carp)

------
potiuper
TL;DR: a dependent lambda calculus with linear-type-based memory management
and an LLVM backend. Oddly, the source language, a [dependent]
lambda-calculus or a [dependent] Cartesian closed category, would seem more
restrictive than the linear types, or closed monoidal category, used to
implement the compile-time memory management system.

~~~
EE84M3i
If you're not familiar with linear types, a fun paper to read is "Linear Types
Can Change The World!"[1] although I might just be partial to it because of
the wonderful name.

[1]:
[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.5002](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.5002),
by Philip Wadler
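
The paper's central trick can be glimpsed in Rust's move semantics (an informal analogy of mine, not Wadler's formal system): when a function consumes the sole owner of a value, it may update that value in place without any caller being able to observe the mutation.

```rust
// A function that takes its argument by value consumes the only
// owner, so an in-place update is unobservable from outside --
// the "world-changing" benefit of linear use.
fn set_head(mut v: Vec<i32>, x: i32) -> Vec<i32> {
    if let Some(head) = v.first_mut() {
        *head = x; // mutate in place; no copy of the vector
    }
    v // ownership handed back to the caller
}

fn main() {
    let v = vec![1, 2, 3];
    let v = set_head(v, 9); // old `v` is moved; no alias survives
    assert_eq!(v, vec![9, 2, 3]);
}
```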

~~~
throwaway17_17
For anyone interested in Wadler’s publications on Linear Logic, his homepage
has a lot of pdf versions of his work in the area.

[1]:
[https://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html](https://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html)

------
rehemiau
I didn't find an explanation of how it deals with variables (e.g. strings)
allocated on the stack. Does borrowing work for those, and if so, how?

~~~
rehemiau
The ability to use the stack is a very important performance-oriented feature.
Similarly, the ability to nest structs or other data structures without
intermediate pointers. I hope this is solvable!

------
rntksi
Interesting license choice.

I've found out through reading about it that my country is not party to the
famous Berne Convention.

------
devit
It's kind of hard to decode the explanation given that it spends a lot of text
on useless formalism instead of substance, but it seems like this language has
three fatal problems:

1. Borrowed pointers are not a first-class concept, but just syntactic sugar
over returning a parameter from a function, i.e. &T or &mut T are not actual
types in Rust parlance

2. There is no mention of how to achieve safe shared mutable data, or even
just shared read-only data, i.e. the equivalent of Rust's Arc<Mutex<T>> or
Arc<T>, which probably means the language has no support for them

3. It seems there is no way for a struct/record/tuple to contain another
non-primitive data type without the latter being allocated on the heap

So as far as I can tell aside from the dependent types this language is much
less powerful than Rust and cannot fully utilize the CPU, and hence far from
the goal of having a perfect Rust+dependent types language.
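
For readers unfamiliar with the Rust pattern named in point 2, here is roughly what Arc<Mutex<T>> buys you (standard Rust, nothing Neut-specific):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc provides shared ownership across threads; Mutex provides
    // exclusive access for each mutation.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4); // every increment seen
}
```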

~~~
voxl
Just because you're not capable of interpreting the formalism doesn't make it
useless or unsubstantive. This readme wasn't written for you; it was written
for a type theorist.

~~~
zozbot234
Eh, I'm not sure about that. A type theorist would expect to see a clear
description of what translations are performed as part of compiling this
language, and a rigorous argument that this helps solve a real issue, e.g.
wrt. memory management. It's hard to see either in the linked readme - it
reads like a description of some promising, rough experiment, but not quite
fully worked out in a way that would make it clearly understandable to
uninvolved folks. I'm not saying that there's anything wrong with that, and
it's definitely on par with many Show HN's. Just trying to call for some
perspective.

