The Vale Programming Language (vale.dev)
219 points by danny00 on Nov 20, 2020 | 171 comments



It's not called out on the web page, but the novel concept here is the use of regions for memory safety. Microsoft's Verona[0] is also taking this approach. People seem to really like Rust, but it's important to keep experimenting to see if we can get the same benefits with better ergonomics.

0 - https://github.com/microsoft/verona/blob/master/docs/explore...


From my reading of the docs, Vale is not memory safe. It offers some tools to assist with memory management, but is fundamentally closer to the safety level of C++ rather than Rust.


Not being fully memory safe but assisting with it is the way to go for the broad scope of applications, imo.


I don't really agree. The vast majority of applications are fine being written in whatever flavor of high-level, garbage collected, memory safe language you want. There are many nice options with pretty much any set of features and style choices you could want.

The rest of these applications have either hard realtime or hard performance requirements, or both. What I don't see in these apps is memory safety issues being 'ok'.* It's 2020; using a memory safe language should be table stakes for new projects at this point.

----------

* The one exception I can think of is single player video games, where the worst that can happen is basically just a crash to desktop. Multiplayer games are a different story, especially with the trend toward paid DLC, where these kinds of issues could result in anything from exploits in multiplayer matches (hurting the community and thus the game overall) to players circumventing DLC restrictions (resulting in a loss of profit).


I'm not entirely sure what's supposed to be "novel" about region-based memory management. It was featured already in Cyclone, a C-like memory-safe language that was a heavy inspiration for Rust. Rust ultimately went for borrowck and lifetimes as opposed to general regions because of ergonomics. Regardless, it's always interesting and worthwhile to see alternate approaches being revisited - for instance, explicit regions might help with some scenarios that currently require unsafe{} in Rust.


FWIW Cyclone also had unique pointers.

I don't know if explicit regions help very much with safety, but you can certainly go places with a more expressive type language. E.g. ATS - http://www.ats-lang.org/


My theory is that in order to replace C/C++ in the systems language space we are going to need several different types of languages.

Rust is pretty nice, but the options for allocation and its ownership type system make it a mismatch for certain types of applications (probably games and OSes, but we'll see how that plays out).

I think regions are neat, and I'm looking for opportunities both to play around with them myself and for the programming language community at large to get more exposure to them.

Exposing more ideas to more people leads to better ideas about how to fit everything together and which features to leverage to solve which problems.


A modern alternative to C's FFI needs to be developed, perhaps a standard like gRPC but as a compiler extension. C's FFI carries so little information, due to C's weak type system, that it is wholly unsuitable for higher level languages. Yet we use it because it is the lowest common denominator. The C FFI has cost hundreds of hours of lost productivity due to having to re-annotate all the types in the non-C caller, and that is before we even take into account the mess that is function pointers and types generated by the preprocessor. And no, tools like bindgen and c2rust are stopgaps, not a real solution. A lot of low level code cannot be called safely from a higher level language at all without hundreds of hours of manual work wrapping and annotating each individual function.

We need a new standardized, zero-cost, cross-language interop format that operates at the ABI/memory level. Rust's cxx is a good start, but it doesn't scale m:n with regard to new languages: every language has to implement a custom wrapper for calling every other high level language if it doesn't want to lose 80% of the benefits of the respective language's type system when passing through the primitive C FFI - for example, calling Rust from Julia, or calling Julia from V lang. The lack of a common interop format hurts the developer ecosystem, as the proliferation of programming languages simply leads to greater fragmentation. The web solved this a long time ago with JSON APIs, generated OpenAPI clients (and now GraphQL), and, to a lesser extent, gRPC. We need to do better when it comes to systems programming.
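To make that concrete, here is a rough sketch of what a hand-written binding to a C function looks like from the Rust side (the `render`/`Widget` names are invented for illustration):

    use std::os::raw::c_char;

    // Opaque stand-in for a C struct that is only ever handled via pointer.
    #[repr(C)]
    pub struct Widget {
        _private: [u8; 0],
    }

    extern "C" {
        // Who owns the returned buffer? May `w` be null? What exactly is `n`
        // the length of? C's type system can't say, so every binding author
        // has to rediscover those invariants and re-encode them by hand in a
        // safe wrapper.
        fn render(w: *const Widget, n: usize) -> *mut c_char;
    }

Everything a caller actually needs to know lives in the comment, not in the signature - which is exactly the information a richer interop format could carry.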


The whole point of FFI is that it's a lowest common denominator - it's a foreign function interface, meant for interop between two potentially wildly different runtimes, up to and including assembly language. That means it can't rely on quirks like language-specific type information or bounds checking. The only thing that all runtimes on a machine are guaranteed to have in common is that they use the same set of memory and CPU registers, so that is all that FFI can use to define subroutine interfaces. Hence terms like 'ABI', 'calling convention' and 'memory layout'.

The good news is that whatever issue you have can probably be solved without resorting to FFI. There are plenty of other interop options - for example, if you control the library, you can use your operating system's IPC mechanisms with an agreed-upon serialization format. Or, if your library and application are within the same ecosystem, you can use language-specific library management features such as Python's import or Rust's crates. FFI is a bit like C itself - if you're reaching for it as a solution to your problem, then your finger is already on the trigger of the footgun.


I think there's a conflict between FFIs in practice and theoretical FFIs. The few papers I've read on sound FFIs between languages A and B rely on the fact that the common interop target is a high-level language whose features subsume the features of A and B. ABI, calling convention, etc. are merely implementation details.

So from this perspective, the interop target should actually not be the lowest common denominator, but the opposite: a target that can describe all possible user languages in a common format.

In fact, I would actually argue that the implementation details are NOT that interesting and only a distraction: that we need an abstract way of declaring compatibility. For instance, one can imagine a high-level FFI target where function types are parameterized by their calling convention i.e.:

`foo: (VOID<C> -> VOID<C>)<C>`

`bar: (UNIT<RUST> -> UNIT<RUST>)<RUST>`

and attempting to call `bar` from `foo` involves a compiler intrinsic like `RUST_CALL(bar)()` to convert the type (and the calling convention) appropriately.


IPC is, I think, the concept that has been neglected and led to the desire for some kind of really intelligent FFI.

If you have code in Ruby and in Julia that you want to mix together, it's probably because both pieces of code leverage the distinct capabilities of those two languages, which have nice capabilities because they accepted very different tradeoffs.

Would you really want Julia with Ruby's Number class? It would entirely kill performance. You want it to be an integer maybe, or a double, or sometimes something else. The point is that it's going to be application specific, because another user wants some different subset of Number's behavior. I don't think there's a shortcut here besides making a very limited or opinionated FFI system.


Not necessarily. Sometimes I want to use FFI because the "hard work" (e.g. implementing a complicated protocol) was done in language A, and I want to use language B in my project, but I don't want to redo all that hard work.


And most people used to believe that ergonomic, zero cost memory safety was impossible too. And yet it moves. There is no reason why we have to settle for a poor FFI solution just because it was the choice of our predecessors.


Memory safety and FFI operate on two different layers of abstraction entirely. What does it mean to enforce 'memory safety' across the boundaries of two different applications/libraries that may have entirely different ways of keeping track of memory allocations and deallocations? Again, FFI has to be agnostic to such implementation details in order to work, and it does so by forcing you to define interfaces in terms of the lowest possible level of abstraction common to all applications. That means explicitly specifying expected memory layout and calling convention on both the application and library sides, i.e. an ABI.


>We need a new standardized, zero-cost, cross-language interop format that operates at the ABI/memory level

I don't think that's doable unless the underlying language already has the same ownership and lifetime guarantees as Rust does; otherwise these bindings have to protect against memory issues by effectively taking ownership of the data to guarantee its lifetime, which is not zero cost.

I simply don't think that you can auto-generate zero-cost safe C or C++ bindings automatically.


> I simply don't think that you can auto-generate zero-cost safe C or C++ bindings automatically.

As stated, that's likely an accurate statement, but if you separate the requirements for the bindings the problem looks way more tractable:

- auto-generate: I'm pretty positive that this point is only hard given the following, and I feel you would agree with that assessment, so I'll just focus on the other ones.

- zero-cost: doing zero-cost ffi is the best option, but a lot of advanced constructs that people want to use cannot be well represented that way because they rely on the internal consistency of the host language. That means that dynamically loading a library written in Rust might make it so that you can only interact with ADTs, trait objects and whitelisted provided traits on those ADTs, potentially only through trait objects. That's not zero-cost, but it is super powerful. Objective-C has lived its entire existence going through vtables, and that has even enabled runtime introspection that was super useful.

- safe: safety can be accomplished easily enough if you don't constrain yourself to zero-cost, or if you put constraints on what is supported for zero-cost ffi. You could say that method calls on trait objects are safe (see the sketch after this list), but type parameters are out of scope because ffi and monomorphization are not compatible. You could restrict yourself to either no borrows crossing ffi, or no mutable data (only internal mutability through trait methods) allowed.

- C or C++: I would focus on other languages, personally. Having easy interop with Java, C#, Python and JavaScript would open opportunities that do not have anything to do with systems programming per se, but that would make it more likely that the next numpy isn't written in pure C.

- automatically: the need for extra annotations on either end of the interop simplifies part of the problem and might even help with documentation. Adding some light ownership information to APIs in non-Rust languages would be great for non-Rust users of those APIs.
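To illustrate the trait-object point above, here is a minimal sketch (names invented, and only one of many possible designs): expose the object behind a thin opaque pointer and route every call through the vtable, so no monomorphized generics ever cross the boundary.

    // Double-boxing makes the exported handle a plain thin pointer, since
    // Box<dyn Trait> itself is a fat (two-word) pointer.
    pub trait Counter {
        fn increment(&mut self);
        fn value(&self) -> u64;
    }

    struct SimpleCounter {
        count: u64,
    }

    impl Counter for SimpleCounter {
        fn increment(&mut self) { self.count += 1; }
        fn value(&self) -> u64 { self.count }
    }

    #[no_mangle]
    pub extern "C" fn counter_new() -> *mut Box<dyn Counter> {
        Box::into_raw(Box::new(Box::new(SimpleCounter { count: 0 }) as Box<dyn Counter>))
    }

    #[no_mangle]
    pub extern "C" fn counter_increment(c: *mut Box<dyn Counter>) {
        // Caller's obligation: `c` came from counter_new and wasn't freed yet.
        unsafe { (*c).increment() }
    }

    #[no_mangle]
    pub extern "C" fn counter_free(c: *mut Box<dyn Counter>) {
        unsafe { drop(Box::from_raw(c)) }
    }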


Hey! Surely there's some Fortran left in some of the libraries that back numpy.


This is a very real problem, particularly the M:N issue.

But I think it's inevitable you have to do things on both sides of the language boundary, i.e. pushing them towards a lowest common denominator. (e.g. Microsoft's COM, which actually worked pretty well)

Really what it pushes you toward is wire protocols and NOT relying on the type system. That is, the C and C++ ABIs are related to, but different from, the APIs (type system).

My experience tells me that transparent interop is kind of a pipe dream. The problems always pop up somewhere. By "transparent", I mean "Rust function calls C++ function" and "C++ function calls Rust function" without other metadata/bindings. The codegen becomes a big problem in practice.

Fundamentally a lot of languages LOOK the same but they ACT completely differently. A Rust function is not a C++ function is not a Python function is not a Go function. (And funny thing -- as of C++ 11, C++ now has many different notions of "function", because of move semantics).

And functions are actually "easy" compared to types (e.g. inheritance vs. typeclasses vs. interfaces)

----

Basically I would say the problem is that you either spend time manually wrapping or annotating your code (as you say), OR you put an ever growing list of heuristics in the code generator, a la SWIG.

Those heuristics have bugs, and will make your program unreliable. They're also "someone else's problem", which leads app developers to come up with horrific workarounds.

So I go with simple manual wrapping, and reduce the number of things to wrap by changing the structure of the program. This also has other benefits like efficiency, i.e. crossing the language boundary less often.

(Related: I think better build systems can go a long way toward solving this problem. Unfortunately there seem to be a lot of language-specific build systems and package managers now, which only exacerbates the interop problem. If you have one language-neutral build system, it's not that bad.)


I'll be presenting a video on D's prototype ownership/borrowing system tomorrow at 14:00 UTC.

https://dconf.org/2020/online/index.html#schedule

https://dlang.org/blog/2020/11/20/dconf-online-2020-how-to-p...

It's free to participate!


Games often use arena-based allocation systems, slotmaps etc to manage myriads of instances. I don't see how any of that is a problem in Rust?
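For reference, the kind of thing I mean - a minimal generational-index sketch (names invented, not any particular crate). Handles replace references, so the borrow checker never sees long-lived borrows, and stale handles fail a generation check instead of dangling:

    #[derive(Clone, Copy, PartialEq)]
    struct Handle {
        index: usize,
        generation: u64,
    }

    struct Slot<T> {
        generation: u64,
        value: Option<T>,
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { slots: Vec::new() }
        }

        fn insert(&mut self, value: T) -> Handle {
            // Reuse a freed slot if one exists, bumping its generation so
            // old handles to it become invalid.
            for (index, slot) in self.slots.iter_mut().enumerate() {
                if slot.value.is_none() {
                    slot.generation += 1;
                    slot.value = Some(value);
                    return Handle { index, generation: slot.generation };
                }
            }
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Handle { index: self.slots.len() - 1, generation: 0 }
        }

        fn get(&self, handle: Handle) -> Option<&T> {
            self.slots
                .get(handle.index)
                .filter(|slot| slot.generation == handle.generation)
                .and_then(|slot| slot.value.as_ref())
        }

        fn remove(&mut self, handle: Handle) -> Option<T> {
            let slot = self.slots.get_mut(handle.index)?;
            if slot.generation != handle.generation {
                return None;
            }
            slot.value.take()
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let enemy = arena.insert("goblin");
        assert_eq!(arena.get(enemy), Some(&"goblin"));
        arena.remove(enemy);
        assert_eq!(arena.get(enemy), None); // stale handle, not a dangling pointer
    }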


It's mostly the ergonomics of Rust being fine-grained that present a problem. While it's gotten better with revision, fundamentally, you start in an assume-nothing mode in Rust and then boilerplate appears to say "actually, this one is mutable". This incurs friction on nearly every line of state manipulation and when you're doing game-style allocation, you mostly don't care. You have a tremendous amount of mutable state; some of it you blow away every frame, the rest you may want to lock sometimes.

What Vale does that is attractive is to make simplifying assumptions that address this problem domain closer to the level of granularity that it deserves.


I think a pure ECS has its state manipulation well-encapsulated and organizable. Part of me thinks that sort of design would mesh pretty well with Rust’s ownership system.


Rust folks think this as well; there has been a lot of activity in this area, and it has even spread some of the ideas behind ECSes to areas outside of games.

https://kyren.github.io/2018/09/14/rustconf-talk.html


It is not. The usual arguments against Rust for games are more cultural. Rust makes a tradeoff of being a bit harder up front, for less time debugging later. Many people say that games specifically would prefer the opposite tradeoff.

We'll see as things continue to develop here.


Honestly, most of the iterative parts of gamedev I worked on were either already up in the tooling/editor or in a scripting language.

I think Rust for core engine and Lua for game logic would be a pretty solid combo.


I’m drawn to regions because they seem to intuitively match how I write my high-performance systems software. In that sort of software, I really need precise and cheap ownership rules, but without any restrictions on topology.

It’d be super if regions were a mathematically robust way of describing that engineering solution.


I don't think I understand the difference between regions and Rust's concept of lifetimes. In my mind they were two sides of the same coin, but you seem to argue otherwise.

Note that my knowledge of regions comes mainly from Cyclone, not Vale or Verona; are those different concepts with the same name?


> People seem to really like Rust, but it's important to keep experimenting to see if we can get the same benefits with better ergonomics.

Is my intuition correct that one could implement this type of memory handling in Rust as well?


So you’re saying that code you wouldn’t be able to write in Rust would crash at runtime in Vale, right?


Rust borrowck is conservative, so not quite. There are plenty of memory-safe programs that can't be written (or require runtime checking) in Safe Rust.


that's not my question though


The adjective modern is so popular, yet it's so devoid of meaning. What does having a modern syntax even mean? The impression I get when I read projects described like that is that the authors are saying they are following the latest trends they could find, but they don't quite know why.


I think it's effectively code for "it's not like C". Easy iteration, lambdas, generics, patterns, type inference etc...

None of these features are technically novel but they do tend to be shared by newer system languages, as opposed to C/Pascal/Fortran/Cobol and friends. Then of course there's C++ that's both old-school and modern because C++ is slowly but surely evolving to become the programming language equivalent of the Borg and eventually all other programming languages are bound to become a subset of C++.


"... eventually all other programming languages are bound to become a subset of C++."

God help us all!!!!

(fortunately for my sanity, I have zero confidence that this will be the case)


Every language will eventually become a subset of C++, and they will all compile to javascript.


The modern ones will compile to webassembly.


Via JavaScript, on the client.


Same with "readable syntax". Nobody sets out to design something deliberately unreadable. It usually means that it conforms to the author's personal flavours and there's nothing wrong with that but the implied "everything else is unreadable" subtweet doesn't really help anything. Case in point:

    each planets() (planet){
        println("Hello " + planet + "!");
    }
I bet that the double () () here leads to some pretty unreadable code outside of this happy path. But that's just my opinion, not some objective truth


Not to mention keywords such as ret, inl, imm, mut, mat and even yon - 'readable' is certainly subjective. Not to dissuade anybody from trying anything outside the norm but I wonder if calling it readable is accurate.


"blazing fast", "lightweight", etc...

Buzzwords abound in modern technology circles these days.

To your point; "modern" is especially insidious because it implies that older technology is bad. I'd take vanilla PHP and JavaScript over TypeScript and the frameworks any day. I recognize that's an extremely unpopular opinion but those languages are battle tested over 25 years and if I run into a problem that I haven't seen before, there's a high probability I'll get the correct answer from a search. For the "modern" stuff, I get five different answers and none of them solve my problem because the language and/or framework has changed as many times since the question was first asked.

That said; given the choice I'd choose Clojure and ClojureScript. It's a young pup compared to PHP and JS but it's still based on a 60-year old language and just a joy to work with.


As a lisper, I find “modern syntax” means “looks vaguely like C or JavaScript”


I think it's more "borrows many ML concepts but uses C/C++ style syntax to look more mainstream, also no GC".


"My advanced, fast, high-performance, and powerful language/system/framework is so efficient, flexible, concise, usable, readable, extensible, structured, and simple."

In general, if I read any of those descriptors without the associated comparative object, I categorize them as advertising weasel words for whatever X is being described as having them.

Those words are comparison words. They are meaningless without either benchmarks, or direct comparisons with other items in the IT landscape that have well known quantities for X.

So plop modern on that pile as well. It's even more of a loaded term, because "modern" implies a range of component interpretations, from trendiness to actual advancements in the "state of the art" (such as UTF-8 vs ASCII).


"following the latest trends" basically works as a definition for the word "modern" in the context of programming languages. "involving recent techniques" etc. why modern? is a fair question I guess but why not? are there any good arguments against things like type inference, generics and lambdas?


I see the first paragraph of a language's website as sort of the "elevator pitch" for that language. You probably want to cut any vague and unneeded words in that context and focus on the defining aspects of your language. "Following the latest trends" is also vague and, the more precisely you define it, the more you will need a timestamp next to the term. I think it's better to just spell out the actual features you really care about.


Are any of those recent techniques?


as far as programming language evolution goes I have to assume "recent" refers to a span of 20 years or so - otherwise I don't think we have any recent techniques


I’m fairly certain we’ve had lambdas (lisp) for something like 40 years. Type inference (ML/lisp compilers) for 30-40 years. And generics, at least in the sense of parametric polymorphism, (Ada/C++/Java/ML) for about the same amount of time. As far as I can tell, the borrow checker and some features around dependent types are newish, but actually modern features are few and far between.


Lisp had lambdas for 60 years, I think ML has had type inference for about 50 (maybe there are earlier languages that had it?), ML also had generics for that long. So, none of your examples were recent developments by your standards.

Even in C-like languages, generics were part of C++ (or rather templates, in that case) since the beginning, I think (that's 35 years ago). I think you can make a case for type inference reaching the C language family recently, though.


I mean, sure, "recent" could refer to a span of even 50 years, why not. I wouldn't trade my Roslyn compiler for ALGOL 68.


Comparing a compiler to a language is comparing apples and oranges.


I disagree I think it's more like comparing an apple to an apple tree


Either way, it’s hard to compare a 30 year old language to a modern compiler because the modern compiler has benefited from 30 years of optimizations: you can’t tell how the old language would perform with a similar investment.


The adjective has a lot of meanings, nothing I'd call a void.

> More common, especially in the West, are those who see it as a socially progressive trend of thought that affirms the power of human beings to create, improve and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology. From this perspective, modernism encouraged the re-examination of every aspect of existence, from commerce to philosophy, with the goal of finding that which was 'holding back' progress, and replacing it with new ways of reaching the same end.

https://en.wikipedia.org/wiki/Modernism

Seems applicable to "modern, readable syntax". It means not afraid to buck what people are used to if there's a better way to do it.


Though the word does have a dictionary definition, it's just sort of empty. That's what I was criticising. Both adjectives, modern and readable, are subjective and not really meaningful in a technical context. Unless you define precisely what modern and readable are, they are just vacuous words in an otherwise technical write up.


modern usually seems to mean Algol syntax and not having macros :)

aka the status quo of 60 years ago


turn it up to 11 with "truly modern" or similar.


Even though I think the market for programming languages is saturated and it will be for the next decade or more, I think it's always nice when someone tinkers and creates something new, so, my congrats to the authors!


The market for programming languages has been saturated at least since I got into programming. Which was not all that recently.

The market for music has been saturated for even longer. But nobody really complains when someone wants to start a band, because we recognize that creative work has intrinsic value, even when it produces little economic return, even if only by enriching the life of its producer. Experimenting with programming languages is only very superficially different from that.


No reason to complain until people start bugging you to go see their band or use their programming language.


I think this is a weird way of looking at it.

PLs are evolving continuously on some level or another. New languages pop up with new concepts or ways to do things; old languages can then incorporate some of these things.

It works similarly with all things: there is a conventional mainstream that most adhere to, for efficiency gains, but the creative risk takers are the ones who drive things forward, or fail to do so.


Yes, I mostly agree with you, but I think that if we look closely at how programming languages have evolved over the years, we can more or less group them into "generations", i.e. periods where certain concepts have dominated the interest of the community and have dictated how software and languages in general are designed.

These are usually linked with one or several languages springing up and becoming popular, either by embracing new paradigms or by strongly rejecting a mainstream concept whose usefulness has been cast into doubt, either by the industry or by the community.

Outside these periods, the overall interest tends to stagnate while the newer languages become stable and gain mainstream acceptance. In such a situation, a new contender must either find itself a niche, by covering very specific needs in very effective ways, or it must compete for mindshare in order to justify itself among the newest big names (i.e. any new compiled, strongly typed and memory safe language nowadays is almost forced to compare itself with Rust and, I dare say, Zig).

That's what I meant with "market saturation": I think the stagnant landscape of the 00s, dominated by classic, OOP languages, gave birth to a huge wave of "safe" languages in the 10s, many of which share similar concepts and ideas; I think we're now in a moment where some of these have made a name for themselves and have been established as mainstream, serious solutions (see Rust or Go), requiring newer contenders to be really innovative in order to capture mindshare and not be considered "me too" languages.

These are, obviously, just my two cents; I really love compiler development and language design, and I think we should always support new languages and the work people do towards the goal of designing better languages and improving existing ones.


There is plenty of idea space to explore in programming languages.

ColorForth, for example... lets you do some of the work a parser would do, at edit time, by adding additional attributes to the source.

Verilog, Ladder Logic, and Register Transfer Language all allow the programmer to specify a flow of logic and expressions that is always running, instead of a flow of instructions that we're all used to. Mixing this paradigm with the traditional code we're used to offers powerful possibilities.

Transforming spreadsheets to a set of statements that are always executed, and vice versa could be fruitful as well.


It's important not to think of it as a market. It is pretty valuable (and not in terms of dollars right now) to create things like this.


Looks like inferred return type on the `planets` function on the first page?

I'm not a big fan of this. IMHO local inference is great, but it makes things so much better when function signatures are explicit.


D would be ruined without it. The ability to return so-called Voldemort types is not only hugely beneficial for expressiveness but also for efficiency (i.e. particularly avoiding the heap).

Syntax to apply some kind of constraint to the return type would be ok.


When I looked up the example for D, it looks like something that could be named with e.g. Rust's impl feature. In Rust you can return an "impl" of an interface, which is an object with concrete but unnamed type that implements a known interface. It similarly avoids the heap and avoids virtual function calls, without exposing the concrete type.

https://doc.rust-lang.org/edition-guide/rust-2018/trait-syst...


If you have a publicly visible interface, that's a name. It may be a name of a base type but it's still a name. IIRC, you can't even typeof() an instance of a Voldemort type.

That has some implications for how Voldemort types work that can't be replicated with anonymous interface implementations. The main one is, Voldemort types are very thoroughly sealed. There is no way to create an instance of one outside of the function where it's defined. With an interface, anyone else could come along and create their own implementation.


You may be thinking of Java's idea of an interface, which is a type (with specific subtyping behaviors). Traits in Rust are not types. There are two different ways to reference an object which implements a trait indirectly (actually more, but these are the dominant ones):

- a `Box<dyn Trait>` is like a Java object referenced by an interface. Everything uses dynamic dispatch, and it is an actual concrete type (though the actual "underlying" type is type-erased).

- an `-> impl Trait` is an existential type which uses a trait as a bound, and should be equivalent to D's "Voldemort" types, except that it can still satisfy trait bounds required by other functions (small sketch below). For example, if you `-> impl Iterator<Item = u32>`, you can pass that result to a function expecting an iterator of `u32`. However, the type is fully defined by the callee and can't be instantiated/inspected externally.
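A small illustration of that second bullet (std only; `evens`/`sum_first` are made-up names):

    // The concrete type is some unnameable StepBy<RangeFrom<u32>>, but
    // callers can still use the result anywhere the bound is expected.
    fn evens() -> impl Iterator<Item = u32> {
        (0u32..).step_by(2)
    }

    fn sum_first(n: usize, it: impl Iterator<Item = u32>) -> u32 {
        it.take(n).sum()
    }

    fn main() {
        println!("{}", sum_first(3, evens())); // 0 + 2 + 4 = 6
    }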


> With an interface, anyone else could come along and create their own implementation.

Not true in Rust, because the type returned by the function is a concrete type.

I think we may be overloading “name” here, a bit. If you return an “impl” in Rust, you have to name an interface, but you can’t actually create new instances of the concrete type returned by the function.


> Rust, you have to name an interface, but you can’t actually create new instances of the concrete type returned by the function.

I think that's the distinction that really matters. The thing that's neat about Voldemort types, and gives them their interesting properties that are distinct from anonymous types, is that they have no name at all. Not even an interface or trait name.


What would be a concrete use case which benefits from this? I am not experienced with D, but it just seems like it would make code more obscure


There's a bit more explanation on the D wiki: https://wiki.dlang.org/Voldemort_types

I don't really intend to say whether they are better or worse than other ways of accomplishing similar things. Just that they're different.


Yes, from that explanation it sounds like they are equivalent to Rust’s impl return types.

Note that interfaces in Rust are not types, they do not name types. A value cannot have interface type. What happens when you return an “impl” type is that you return some unspecified type, but that type must implement the specified interface.

You could translate the D example to:

    trait HasGetValue {
        fn get_value(&self) -> i32;
    }
    fn create_voldemort_type(value: i32) -> impl HasGetValue {
        ...
    }
The “create_voldemort_type” function simply returns a value of unspecified type. As far as I can tell, this is equivalent to the D code, except it doesn’t use type inference for the function type.


Not really. Part of the point of voldemort types is that no information about the type is 'leaked' outside the function. In this case, the trait 'HasGetValue' is leaked and can be used by other code.


When would you want to have an interface such that nothing about the return type of a function is known to the outside world? I'm genuinely asking; I just don't understand the utility of this concept.


I don't think anyone's trying to argue about whether it's better or worse than the Rust concept. Just that it's subtly different.


I'm not interested in comparing it to the rust concept. I'm just curious why this is something people find valuable in D, and why another commenter would call it a feature D "would be ruined without"


The commenter is just mistaken. The two do the exact same thing.


That's fine, but I'm not interested in comparing them and I still don't understand what the benefit is


I sort of agree, but enforcing this only on things public / exported can be a nice compromise.

(I mean this as a general statement. Idk how the language handles this)


I actually like the option to have both inferred parameter and return types, primarily because they help in writing code while prototyping quickly with several types.

Once the code is well defined, I then start adding the type annotations, oftentimes using the IDE tooling to auto-generate them.

I think a good balance would be that all published packages should have type annotations added in their source code for clarity while short scripts could do without them.


Idk I mean maybe it's a matter of taste, but I just never felt like it was that cumbersome to write type names in the function signatures. In a short script it's not like you would usually have the dozens or hundreds of function signatures it would take to make this feel like drudgery.


Well, think about writing functions inside a REPL or interactive environments like Jupyter notebooks, or just plain scripting. Enforcing type annotations adds little value in those cases and ends up making the language ill suited for such use cases.

Whole form type inference is actually a super power that allows for writing extremely expressive code without giving up type safety.


I like explicit types on functions too.

However, it might be what I’m used to.

Do you think this would be less of a problem with editor/IDE support? Using colors, or other visuals, to infer and show the types while editing might overcome this desire.


Rust chose not to infer types in signatures, for UX and stability reasons. If you infer types in signatures, then a change in the body of the function can change the type of the function, which makes dealing with backwards compatibility harder. It also leads to weird "errors at a distance."


I call this property "an accidental complexity" - when changing one piece of code causes failures in other, seemingly unrelated parts of the system. The property causes fear of introducing changes, and is usually a sign of tight coupling.


I tend to agree with the philosophy that a language should not require an IDE to be productive in.

With function signatures, I think it's particularly relevant for documentation as well. I should be able to look at a function signature in generated docs and know everything about what goes into and comes out of that function. For me the signature should be a complete contract, and changing code within a function body should not be able to change the signature.


Looking at the language ref, you can specify the return types if you want to.


This is not that helpful if you are working with someone else's codebase


Worst part of Ruby for me, I hate inferred returns in general.


I think you may be thinking of implicit returns, where the `return` keyword is not necessary. An inferred return is a separate feature where the return type is known by the compiler, but not stated explicitly by the programmer. (Note that the code in question uses both.)

Ruby, of course, technically has both as well. However, inferring a return type in a dynamically-typed language is probably less noteworthy ;)


I was going to pop in to say that. Also going to add that I hate implicit things like this. It messes with my brain; how hard is it to just add the return keyword? Saving a few keystrokes hardly seems worth the increased cognitive cost to others.


In most languages that have implicit returns (that I’m aware of), the languages are also expression-oriented. For example, in Rust, if/else is actually an expression and you can use the result:

    let result = if foo > 5 {
        "Big foo"
    } else {
        "Small foo"
    };
Implicit returns are just an extension of this (it’s really just “semicolons create statements; if you don’t have a semicolon, that’s the final value of the block”). The explicit return is an actual statement that returns early.

IME the holistic design works pretty well, and I think you can glue together expressions much more naturally this way. The implicit return on itself would be much more annoying IMO.
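To make that distinction concrete, a tiny (contrived) Rust example:

    fn classify(foo: i32) -> &'static str {
        if foo < 0 {
            return "negative"; // explicit `return` exists for early exits
        }
        // no trailing semicolon: this if/else expression is the block's
        // value, and therefore the function's return value
        if foo > 5 { "big foo" } else { "small foo" }
    }

    fn main() {
        println!("{}", classify(9)); // big foo
    }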


“Implicit return” is basically a misnomer; there’s no language that I’m aware of that implicitly returns early. They’re all expression oriented and act like this.


Not to be confused with Vala: https://wiki.gnome.org/Projects/Vala


Wow, that's an extremely poor choice of name for a programming language.

I was 100% sure this was the gnome thing.


In Portuguese, 'vala' is a hole you dig in the ground, or a naturally occurring one. It can also be used as a synonym for "cova", which translates to "grave". I bet it will be hard to build a community for it here in Brazil or in Portugal.


People overestimate the impact of brand names having weird meanings in foreign languages.

For example, OSRAM unambiguously means "I will shit" in Polish, and despite that, it's one of the most popular lightbulb brands in Poland.


too late


I wonder what made the authors of the language make the pattern matching statement keyword `mat`. Like... they were 2 characters away from a perfectly readable name, yet they settled on something that's confusing (it took me a while to grasp it). Guess the time and space savings are worth it, though.


inl, imm, mut, mat ... I see a pattern.


When people build these types of languages, it'd be nice if they prominently highlighted the new approach or change the language introduces. It would be clearer.


Nim tried region-based memory management and ended up going a different direction. The feature still exists, but the core devs ended up considering it a bit of a dead end: https://forum.nim-lang.org/t/6930


Looks like a language heavily influenced by Rust! I'm interested to see if we see more drift in this direction. Rust has by no means conquered the world (yet), but things keep moving that way, and I'm curious whether we'll see looser or otherwise quick-n-dirty (https://users.rust-lang.org/t/prototyping-in-rust-versus-oth...) Rust-like languages.


Sounds good, but the name may cause confusion with Vala: https://wiki.gnome.org/Projects/Vala


Language looks Rusty :)


Well this is it, isn’t it. Because really, what the hell is its USP†? The world already has a perfectly cromulent Rust language. Adding a second, less popular one merely dilutes both efforts; and to what purpose? Ego? Control? Simple unwillingness to accept a direct competitor has already beaten their ass to market and mindshare?

Everyone knows the name Charles Darwin. Now ask them who Alf Wallace is, and why they should care. Or Yohan Blake. Or Atty Osborne… and so on.

Where’s the advantage in being an also-ran? It’s a tough but necessary lesson to swallow: accept your lovely technology is just too late to be the successful product you hoped for. Treat the whole exercise as a valuable learning experience, and move on to your next Big Idea as quickly as you can, this time, of course, making sure to get to market before your direct competitors can, so as to leave them in the dust instead of you.

--

† Their website doesn’t say. Just a technical features list, and yep, one look: Rust, but a lot less mature with waaay less ecosystem and support. That’s not a winning proposition; that’s a sucker’s deal. Damn but every computer geek needs to do a marketing course.


I had to do some digging, but I found this:

> A memory model and first-class regions which together make it as fast and safe as Rust, without the borrow checker and its aliasing restrictions.

(https://www.reddit.com/r/vale/comments/i95ue8/use_case_of_th...)


“I had to do some digging”

Indeed. I think it’s telling that the opening question on that Reddit thread identified the exact same problem I did, and three months on they still haven’t fixed it. Programmers who fail to provide for their users’ needs simply because it doesn’t interest them to do so are a cancer throughout this industry and culture. As a 30-year computer user myself, that is an attitude I expect from children, not from supposed “professionals”.

I don’t care how nice your language is, I care about whether using it is going to make my life better… or worse.

If folks here think I’m bad-tempered and unforgiving, then consider I’ve had to spend the last 20 years teaching myself to program just so I’d no longer be at the complete mercy of those “real programmers” who one too many times made my life shit.

In turn, I’ve done my share of making users’ lives both better and worse, and while I accept that shit happens, I don’t tolerate messing other people around any more than I tolerate others messing me around.

It’s meant to be called “Computer Science” for a reason. And you should see the size of the knives that real scientists bring to a science fight. I’m being kind.


One last comment:

A good parent fights tooth and claw for their child, and gives it every opportunity they possibly can to enable it to succeed. (And isn’t afraid to show tough love either when that is what is needed.)

Sitting with your thumb up your ass expecting success will come to it automatically just because it’s deserving is the mistake I made. And that was no minor me-too indulgence; it was a once-in-a-lifetime opportunity to put end-user automation into the hands of millions. And all that investment and future potential crashed and burned at the very last step to mass adoption, because while I was a good programmer I was a terrible product manager.

Any idiot can write code. That’s the easy part. It’s all the other stuff you need to work hardest at if you’re to build a successful market. ’Cos if you fail to put bums on seats, the best code in the world ain’t worth squat.


> Simple unwillingness to accept a direct competitor has already beaten their ass to market and mindshare?

I have not seen any suggestion of competition here. But even if there is competition, that should be encouraged - competition is healthy, and it promotes development. Who is to say that Rust has all the good ideas and nobody else can have any good idea?

You should encourage competition, not stifle it "because somebody else is more advanced".


You're showing a lot of attachment here. Rust is not your precious child in need of defending by attacking upstarts. There are many tools of this kind and there will be many more to come.


I’m not a Rust user. I have no particular nag in this race. What I do understand, or at least am starting to learn, is the difference between a Technology and a Product, and between a Product and Success.

So once again: What is Vale’s USP†? Because it isn’t on their frontpage; which it would be had they thought to ask that question of themselves.

--

TL;DR: Goddamn it, all you noobs, but learn How To Sell already. Some of us are just too damned old and tired to want to hold your diapers till you learn to grow up. Try making our lives a bit easier for once; not just to benefit us but your own products as well.

.

(† And if you don’t even know what “USP” means, well there you are. As someone who has already tried to bring truly groundbreaking new tech to market and flubbed it, and is just about to roll up sleeves and try, try again, I’m not asking these questions just to be obtuse but to be helpful. However, if you’d rather just insult than ever amount to squat then by all means carry on.)


I don't know what Vale's "USP" is, and I think you're right to point out that it doesn't really sell itself on its front page. But you're being a massive dick about the marketing of a language which is clearly a work-in-progress, and hasn't even hit v0.1. You know you can give advice and be nice about it, right?


Why? Will the market be any nicer to them? They’ve failed to make their case as to why anyone should care that Vale exists; never mind actually differentiate it from the current swathe of robust, established Rust-style languages already in full production use and battling each other for the market’s next 20 years of attention. And they’ve barely reached v0.1, and expect to make a difference when they make their grand entrance into that bullpen? Who’s fooling who here?

Look, if they just want to be another D then by all means have at it. It’s great that they have a personal hobby, but at least have the good grace to put up a notice saying they’re making this thing to please nobody but themselves. That way anyone else looking at it knows not to invest their own time into a toy project that doesn’t even take itself seriously, never mind have the chops to make the rest of the world believe in it too.

Oh, and by the way, I’ve said nothing about Vale that I’ve not said of my own projects… right before I’ve pulled the plug on them for failing to hit their overall objectives. And the best of that work’s been technically excellent, with sitting users royally pissed that I’ve just chucked their investments on the scrapheap along with my own. But I’m a realist; and it’s better to pull the trigger now and move on ASAP to the next thing, than drag out a slow but inevitable death and then have to junk an even larger investment further down the line. See also: sunk cost fallacy.

You want to ask other people to believe and invest in you? You’d damn well better bring more than just a tick list of features and your delicate feels. Else you’re just messing them around for your own personal ego.

/fin

--

TL;DR: When someone offers you difficult questions and brutal honesty, take it and ask for more, ’cos that’s the best gift they can offer. But if all you want is a pat on the head, then go ask your mom as I’m sure she thinks everything you do is wonderful.


I think you are severely overestimating the negative repercussions of "too many pet projects" existing in the world. If anyone gets left in the cold because they depended on a project that a maintainer gave up on, then that's a valuable lesson that needs to be learned early in one's career.

I argue that, for most small projects, the net educational value of bringing an idea to fruition in public is far greater than any negative externality that such a project imposes. Imagine a world where every side project by a novice engineer is ruthlessly speared to the point of abandonment. I'm sure your counter-argument is that "trial by fire" is the most effective means of growth. You probably believe that forcing people to give up leads them to consider bigger and better things. That is far from the case. Most people give up, and never come back. You are focusing far too much on the immediate value of the project, and completely dismissing its value as a means of creating a motivated, learned individual who can potentially make huge contributions in the future.

Anyway, if your most salient point is "the author should be more explicit about a support plan", then I don't disagree. But man, you could have presented it in a much more accessible and succinct way. If you really care about people taking your advice to heart, you have to be more kind.


Upvoted because I'm pretty sure this comment was written by a neural network. Commented because I'm hoping for a reply.


All comments are written by neural networks, in the end.


Negative, I am a meat popsicle.


Is it interpreted or compiled, AOT or JIT, native or byte code? The landing page doesn't say, but I'd like to know such details and a host of other information that isn't mentioned either. All this doesn't motivate me to dive deeper.


There are several domains of programming with similar but not-quite-the-same requirements

- OS

- Embedded

- Game

In some respects, these are pretty niche, and there is a lot of value imo in having a single language that spans them rather than creating an even more niche language for each one. I've seen C++ and now Rust filling that role.

I've not dug into the implementation details enough to confirm, but when I see comments about runtime checks, I'm concerned there might be things going on that make this cover only one domain rather than all of these related domains. Even if it's a matter of "programming right" to fit all domains, that just seems brittle.


They might want to reconsider that website. Nothing wrong with the design, but it's 100% javascript. It's not like it's a web app or anything: plain HTML and CSS would suffice.


Can someone give a quick TLDR what are the main differences to Rust?

EDIT: Or rather, what are Vale's selling points to someone using Rust?


The author posted this on reddit [0] a few months ago:

Vale's main goal is to be as fast as C++, but much easier and safer, without sacrificing aliasing freedom. It does this by using "constraint references", which behave differently depending on the compilation mode:

    Normal Mode, for development and testing, will halt the program when we try to free an object that any constraint ref is pointing at.

    Fast Mode compiles constraint refs to raw pointers for performance on par with C++. This will be very useful for games (where performance is top priority) or sandboxed targets such as WASM.

    Resilient Mode (in v0.2) will compile constraint refs to weak refs, and only halt when we dereference a dangling pointer (like a faster ASan). This will be useful for programs that want zero unsafety.
Vale v0.2 will almost completely eliminate Normal Mode and Resilient Mode's overhead with:

    Compile-time "region" borrow checking, where one place can borrow a region as mutable, or multiple places can borrow it as immutable for zero-cost safe references. It's like Rust but region-based, or Verona but with immutable borrowing.

    Pure functions, where a function opens a new region for itself and immutably borrows the region outside, making all references into outside memory zero-cost.

    "Bump calling", where a pure function's region uses a bump allocator instead of malloc/free.
[0] https://www.reddit.com/r/ProgrammingLanguages/comments/hplj2...
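For intuition, a rough analogy of Normal Mode in Rust terms - this is not how Vale implements it, but Rc's strong count can play the role of the constraint-ref count:

    use std::rc::Rc;

    struct Ship {
        hp: u32,
    }

    // Stand-in for freeing an owning reference: Normal Mode halts if any
    // constraint ref still points at the object.
    fn free_owned(owner: Rc<Ship>) {
        assert_eq!(Rc::strong_count(&owner), 1, "freed while a constraint ref exists");
        drop(owner); // actually frees the Ship
    }

    fn main() {
        let owner = Rc::new(Ship { hp: 100 });
        let constraint_ref = Rc::clone(&owner); // ~ taking a constraint ref
        println!("hp = {}", constraint_ref.hp);
        drop(constraint_ref); // release it before the owner frees the object
        free_owned(owner);    // ok; with the ref still alive, this would halt
    }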


I'm thinking about code/library sharing. It's useful to know from the interface what the requirements are regarding ownership and mutability. I tend to think that code sharing would be harder in Vale (although I didn't read much about it).


The author also compares and contrasts at the end of a blog post: https://vale.dev/blog/zero-cost-refs-regions#afterword:borro...


One core difference seems to be that references are checked at runtime. In Rust you have to be able to prove that a reference to a value can't outlive the value, but Vale is more trusting, and simply crashes if a reference outlives the value in practice.

You have constraint references, which crash the program if they still exist when the value is freed. And you have weak references, which turn into null. Constraint references can optionally be compiled into raw pointers.
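The weak-reference half is easy to picture with std's Rc/Weak (again, just an analogy, not Vale's implementation):

    use std::rc::{Rc, Weak};

    fn main() {
        let owner = Rc::new(String::from("ship"));
        let weak: Weak<String> = Rc::downgrade(&owner);
        assert!(weak.upgrade().is_some()); // object alive: access works
        drop(owner);                       // object freed
        assert!(weak.upgrade().is_none()); // the weak ref now yields "null"
    }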


Related from a few months ago: https://news.ycombinator.com/item?id=23865674


The project has a Discord channel for those interested: https://discord.gg/SNB8yGH


Isn't this the same language heavily used by elementary OS devs?


You’re thinking of Vala (https://valadoc.org/) not Vale.


Ugh ridiculous naming then


nope, that's Vala


Interesting! Is this supposed to be an easier-to-read Rust?


I don't want to sound mean, but I think the world has enough Algol clones already :/


I admit that I also automatically lose a bit of interest if I see "familiar C-like (Algol-like) syntax" as a supposed feature in a new programming language. Then again, there is no point in being original just for originality's sake, and the space of possible syntaxes for programming languages has been pretty well explored. But in terms of semantics this really doesn't look much like an "Algol clone" at all; rather, it seems to want to explore the conceptual space between imperative and functional languages, where Rust has been breaking genuine new ground and where there is clearly more room for experimentation.


Having spent time with languages that use C syntax and ones with whitespace-based syntax, I much prefer the latter. I think C syntax is seen as a plus not because it is better, but because it is familiar to more people. See also ReasonML.


Just for a dissenting opinion, I use both types of scope syntax on a regular basis and prefer the Algol-ish one. I personally find it easier to reason about where code blocks begin/end when they're explicitly marked. Makes moving code around easier, too.

Note that most of my problems could be fixed with enough tooling/editor plugins/IDE magic, I just don't use them.


Why do we need to denote a function with fn, def, function, ...? We can do better than this.


  main() {
    something() {
      // implementation block for something(), or anonymous scope?
    }
    something() // first or second execution?
  }


Perfect answer.


something() { } obviously is a definition, something() is obviously a call.

Why is that hard for you?


or is it obviously executing `something()` followed by an anonymous scope with a less-common indentation pattern, e.g. equivalent to

  var x = 1;
  something();
  {
    var x = 2;
  }
  // x is now 1 again, as the scope
  // containing the new variable x=2 is gone
  something();
or, as mentioned in another comment, is it passing an execution block to `something()`? (I didn't mention that in my original since it's a bit less common of a language feature, but it's valid too)

enforced semicolons and/or significant whitespace/newlines can resolve this in some cases, but it of course depends on the language. and diverging too far from common langs has its own downsides too, especially where you're changing the interpretation of what are already common patterns.


That is actual Ruby syntax for an invocation of something, so it may or may not be obvious depending on what you're used to. (Though most Rubyists would drop the ()s)

Also, as mentioned, in many languages it is valid syntax for an invocation of something, followed by a new scope.


Why would a programmer ever worry about "ownership"? I've never had to do that.


It's a memory safety concept.

In languages like C you allocate memory yourself, and then you have to make sure to free it up too. This is error prone.

In garbage collected languages like Java there is a garbage collector that figures out when a piece of memory is no longer in use. This has a run-time cost.

The third possible way is to make the ownership of that part of the memory explicit. This way the compiler can check memory safety at compile time.
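The compile-time flavor, in a few lines of Rust:

    fn main() {
        let s = String::from("hi");
        let t = s; // ownership moves from `s` to `t`
        println!("{}", t);
        // println!("{}", s); // won't compile: use of moved value `s`
    } // `t` goes out of scope here, and the String is freed exactly once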

There is a much better description of this in the Rust book: https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...


Then you haven't ever written a line of C/C++, I presume? Ownership is just a formalization of the way almost all resources (including memory) are managed in C: a single function has the responsibility of doing cleanup; that responsibility can be moved around to other functions of course, but only one function ever has it at a time.


>Then you haven't ever written a line of C/C++ I presume?

Correct... I'm a Pascal programmer. I create objects when a program starts, delete them at the end. I've never really had any of the pointer problems that seem to plague C/C++.


> Correct... I'm a Pascal programmer. I create objects when a program starts, delete them at the end.

Then I guess you should instead say you’re a ‘single-shot, compiler-like batch job executable’ programmer. I imagine that colours your perception more than the choice of programming language itself. If you were to write a program that has to manipulate deeply nested data structures, or a long-running service that has to acquire and dispose of an unbounded number of various kinds of resources over the course of its runtime in a deterministic manner, you’d start longing for an ownership system very quickly.


I'm trying to imagine when I'd worry about ownership. The only thing that comes to mind for me is having to provide buffers for string results from OS calls, etc. In Pascal, you can return a string, assign it, append it, etc., and never have to allocate/dispose of them... it's taken care of under the hood by automatic reference counting.

I'd really like to understand this... what specific type of thing were you doing when you found yourself longing for an ownership system?

[edit] It occurs to me that anytime I have objects, they eventually trace their ownership to a single global variable. Perhaps that has something to do with this?


The examples are numerous: databases, say, or GUI frameworks. You end up in situations where a parent needs a reference to a child and the child needs a reference to its parent, and it gets unwieldy.
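In Rust terms, the usual escape hatch is `Rc` for the owning direction and `Weak` for the back-pointer, so parent and child don't keep each other alive forever. A rough sketch (the `Node` type is made up):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        parent: RefCell<Weak<Node>>,      // weak: does not keep the parent alive
        children: RefCell<Vec<Rc<Node>>>, // strong: a parent owns its children
    }

    fn main() {
        let parent = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&parent)),
            children: RefCell::new(Vec::new()),
        });
        parent.children.borrow_mut().push(child);
    } // both nodes are freed here; the Weak back-pointer can't create a leak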


It is an alternative, in many cases, to automatic reference counting. It's sort of like saying "why would I need automatic reference counting? I have a tracing garbage collector."


That makes sense... thanks!

So, in general, if something is too big to easily be copied, or can't be stored in a single variable of known size (like strings, dictionaries, lists, trees, etc.), it's either GC or ARC to the rescue?


No problem. :)

That's not quite it. It's more about compile time vs runtime. Arc and GC are runtime tracking of "when can this be freed." Ownership is compile-time tracking of the same. Rust has refcounted types in its standard library, but you only need to use them relatively rarely. Here's a small example with not too much syntax. The core idea:

    fn main() {
        // String is a heap-allocated, mutable string type
        let s = String::from("foo");
        
    } // s goes out of scope here, and so will be freed here;
      // the compiler generates the code to free the heap allocation at this point
s is the "owner," and there's only one, so this can be tracked fully at compile time.

Ownership can move, too. Let's introduce a function:

    fn foo(bar: String) {
        // body would go here, elided to keep this simple
        
    } // because bar is of type "String", this function "takes ownership" of the
      // value passed as 's', and so the compiler will generate the code to free
      // its heap allocation at this point

    fn main() {
        let s = String::from("foo");
        
        // we pass s to foo...
        foo(s);
        
    } // ... and because we did, it no longer exists in main, and the compiler
      // knows this, so no code to free the allocation is generated here. if
      // it were, this would lead to use-after-free.
All of this is trackable, 100%, at compile time. So there's no runtime overhead here at all.

But imagine that foo() creates a new thread inside of it, and we want to access said string from inside that thread, as well as from within main. Now we have multiple owners, not a single one, and it's not possible at compile time to know when the string should be freed. The simplest solution here is to reach for reference counting, and you'd end up with Arc<String>, that is, a String that's wrapped in an atomic reference count.
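Sketched out, that might look something like this:

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let s = Arc::new(String::from("foo"));
        let s2 = Arc::clone(&s); // bump the count; both handles share one String

        let handle = thread::spawn(move || {
            println!("in the thread: {}", s2);
        }); // the thread's handle (s2) is dropped whenever the thread finishes

        println!("in main: {}", s);
        handle.join().unwrap();
    } // whichever Arc is dropped last frees the String, decided at runtime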

(Also, references don't take ownership, so if you didn't want to have foo free its argument, you'd have it accept a reference rather than a String directly. This would let you temporarily access the string without freeing the backing storage when the function's body has run its course.)
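Roughly, that version would look like:

    fn foo(bar: &String) {
        // we can read the string through the reference, but we don't own it
        println!("{}", bar);
    } // nothing is freed here; foo only borrowed the value

    fn main() {
        let s = String::from("foo");
        foo(&s); // lend s out...
        foo(&s); // ...and it's still usable afterwards
    } // s is freed here, as in the first example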


Wow... that's quite a bit more than Garbage collection... thanks for explaining it.


I assume you write fairly "static" software that knows its job pretty well in advance. Ada programs tend to use a similar allocation strategy.

A lot of programs written in C and C++ are very dynamic, and may need to massively increase or decrease the number of objects they use while running. For example, consider an audio workstation: when you first start it, it has very little to do, but the user can add literally hundreds or thousands of tracks, each with dozens of effects and modifications.

Each of these features is composed of many objects. Ownership comes in when these objects have complex relationships, where one object may need to be referenced by many others. When it is time to clean up some of these objects — for example, when a user deletes a track in the audio workstation — you have to know which of these shared objects to delete. Ownership is one way of modelling that problem.

Languages that talk about ownership try to optimize the "best case" where an object has exactly one owner, because then it's very simple for the compiler to free that object. The trick is that you have to prove that no one else is using it, or else you have memory errors.


> Correct... I'm a Pascal programmer.

Real men don't use Pascal ;)

Have you ever closed a file?


So everyone else is focusing on the memory allocation side of ownership. The other side to this is aliasing / borrowing / handling references and mutability.

In Python, you might have a `list` that you built up and then pass its data to multiple functions. You assume these functions do not mutate their arguments, so you keep passing the same `list` over again. In this case, the caller is controlling the invariants of the `list` and therefore "Owns" it.

Deep down, one of those functions needs to mutate a `list` to do its calculations. Where the mutation is happening, it assumes it is the "Owner" and can modify the `list` without violating any invariants. Unfortunately, a "Borrowed" list was passed in and now the caller's invariants are violated.

Rust's ownership model helps to catch this kind of problem at compile time. By default, everything is single-owner and mutation requires exclusive access, though you can opt into shared ownership with `Rc`/`Arc`, and into shared mutation with something like a `Mutex`.
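A rough Rust rendering of that Python scenario (`total` and `calculate` are made-up names):

    fn total(data: &Vec<i32>) -> i32 {
        data.iter().sum() // shared borrow: the compiler rejects mutation here
    }

    fn calculate(data: &mut Vec<i32>) {
        data.push(0); // mutation needs &mut, so it shows up in the signature
    }

    fn main() {
        let mut list = vec![1, 2, 3];
        let sum = total(&list);
        calculate(&mut list); // the caller explicitly hands over mutation rights
        println!("{} {:?}", sum, list);
    }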


Yep. Even when you're doing something as simple as iterating over some data structure, you have to remember not to mutate it. Rust enforces that, but Python either corrupts the data structure/iterator or, if you're lucky, throws an exception.
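E.g., a Rust sketch of exactly that; the commented-out line is a compile error, not a runtime surprise:

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            // v.push(*x); // rejected at compile time: cannot borrow `v` as
            //             // mutable while the loop holds a shared borrow of it
            println!("{}", x);
        }
    }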


It's an object lifecycle concept.

You worry about ownership if you have a resource and need to think of when to free/dispose/close it.

You worry about ownership if you have an object that many have a reference to, and some of the references shouldn't keep the associated resources alive, and some other of the references should keep it alive. Those that keep things they referenced alive are (shared) owners.


Garbage collected languages (Java, JS) have significant overhead, and unpredictable latency spikes.

Scope-based memory management (C++) has copying overhead and destruction overhead, and a whole lot of complexity to avoid those.

Ownership-based memory management removes most of the overhead associated with copying and destruction in a less-complex fashion.


Note that C++'s semantics are akin to ownership; it just has different defaults and none of the safety guarantees. A std::vector is owned by the current scope, ownership can be passed around using std::move, and references can be taken using &.


Bad garbage collector implementations, but not GC in general. malloc/free can have latency spikes too.

Also not every programming problem in the world is a memory-constrained hard real time problem - in fact, I would say almost all of them are not.

Memory ownership is certainly an interesting thing to have in your arsenal, but a whole class of problems (especially anything with a graph or modestly complex interlinked data structure) can't easily be reasoned about using this model, or you end up copying stuff anyway to get around the borrow checker.

Having said all that, I'd really like to see a language that lets you use a garbage collector 99% of the time for ease of use, but allows you to use other models for the small sections of the code which need it or within certain designated real time threads.



I beg to differ. For a sufficiently large time scale, every programming problem is a memory-constrained hard real time problem.


D lets you do that if you want.


Ownership is a conservative restriction of allowed behaviours (i.e. Rust still has ref counts); the three approaches don't do exactly the same job.




Valid question. If you started out programming in dynamic languages like Python, you've never had to worry about controlling memory.



