The Vale Programming Language (vale.dev)
200 points by memorable 8 days ago | 90 comments

Vale lead here, thanks for posting!

For those interested in the PL space, here are some of the shenanigans we've been up to in Vale:

* We added "Higher RAII", a form of linear typing that allows destructors to have return types and parameters. [0]

* Just yesterday we finished the first prototype of deterministic replayability, which will allow us to capture all inputs to a program (including all network data, user input, file input, etc., and eventually even thread orderings). No article yet, but our docs on it are pretty approachable. [1]

* Last month, we finished the first milestone of "Fearless FFI", which lets us call into C code without fearing that it will corrupt our Vale data. Later on, we'll add automatic sandboxing, either via subprocesses (using IPC for FFI) or wasm2c (which should be a lot faster).

We've also got some interesting plans for concurrency. We've found a possible way to make memory-safe and data-race-safe structured concurrency even easier, [2] which I hope to get prototyped before the year's end.

As a fan of C++ and Rust, I'm most excited about having the RAII and flexibility of C++ with the memory safety and data-race safety of Rust, while being easier than either of them. We seem to be succeeding, hopefully that continues!

I also want to emphasize that Vale is a work in progress, and we endeavor to be very clear on the site about which parts are implemented and which parts aren't.

We couldn't have gotten this far without our sponsors' support, so big thanks to all of them! [3]

[0] https://verdagon.dev/blog/higher-raii-7drl

[1] https://github.com/ValeLang/Vale/blob/master/docs/PerfectRep...

[2] https://verdagon.dev/blog/seamless-fearless-structured-concu...

[3] https://github.com/sponsors/ValeLang


Vale definitely has some interesting ideas! Time permitting, I strongly encourage you to write down some semi-formal, complete descriptions of these ideas to help share them. They are fascinating takes.

Hey! Cool language. Well done

The key mechanism for memory safety, generational references, is described here: https://verdagon.dev/blog/generational-references

IIUC it's basically a form of memory tagging in userspace, but designed to be safe (64 bits, very low risk of collision) and fast (language is designed to avoid tagging whenever possible: owning refs don't need them; inline objects reuse the gen of the parent; static analysis).

I buy that this can be almost as safe as reference counting or GC, and as easy to use, but faster. But I'm not sure I understand how inline objects work, and how effective the static analysis will be is an important open question.

Comparisons to other languages (C++/Rust/JS) are here: https://vale.dev/comparisons


Good summary!

Generational references were less than half the overhead of RC as of last benchmark. [0]

Re inline data: A piece of inline data will (most often) share the generation of its containing object. A generational reference can include an offset to the parent's generation, which we can use for generation checks.

Inline data should make it even faster, because then we can control our objects' layouts and use the cache more efficiently, and not be chasing pointers all the time. I think that's the biggest advantage of generational references over RC and GC.

We're also building regions into the type system, which can be used to statically temporarily freeze areas of memory (similar to the Rust borrow checker) to eliminate generation checks. [1]

Another very experimental aspect we're prototyping is "hybrid generational memory" which can temporarily lock an object (similar to a RefCell) to elide a lot more generation checks [2] but it's too early to promise it will work.

[0] https://verdagon.dev/blog/generational-references

[1] https://verdagon.dev/blog/zero-cost-refs-regions

[2] https://verdagon.dev/blog/hybrid-generational-memory


But

1. This technique prevents undefined behaviour by halting whenever an invalid dereference is detected, while Rc is a garbage collection algorithm. So you are comparing apples and oranges, no?

2. Rc is a bad gc algorithm. How does this compare to a good quality, generational, incremental gc?


> A generational reference can include an offset to the parent's generation

I see, thanks for the clarification. Then interior pointers will be larger than normal pointers, like all generational pointers? While the interior object itself doesn't have a generation, avoiding that overhead. Makes sense I think.

All the larger pointers do make me worry about increased memory overhead, though (kind of the reverse of the x32 ABI, which has half-sized pointers; I think 5-8% is the quoted perf difference there). Do you have benchmarks of memory overhead? It looks like the link only has throughput.

Regardless, it sounds like the other features you mention should help with both forms of overhead, so it will be interesting to see how much.

Cool project btw!


Thanks!

The increased memory overhead would seem to be a problem, but in the programs I work on at least (games, web servers, compilers), the vast majority of references are owning references, which aren't fat pointers. Non-owning references are very short-lived and on the stack, so the memory overhead should be pretty minor in practice.

Also, if we want to save a little more space, we can reduce the generations to 32 bits (we're leaning this way in fact, as it opens the door to some interesting other features).


> To dereference a generational reference, we do a "liveness check" to see whether the allocation's generation number still matches our reference's target generation.

How does this work with threading (use after check)? Or are allocations always limited to a single thread?

> This will safely halt the program

So it’s not statically memory-safe. That’s not hugely attractive I have to say.


> So it’s not statically memory-safe. That’s not hugely attractive I have to say.

Ref-counting with mutable data (which this seems to be replacing?) works the same way in Rust. You need to take an Rc/Arc of RefCell, and calling borrow/borrow_mut on RefCell can fail at runtime.


Only if you use RefCell instead of, say, Cell or Atomic* or RwLock or Mutex.

I had tracing GC in mind.

Interestingly, the article misses a comparison with Go, which is quite similar to it and, I'd argue, is in fact easy, fast, and safe.

Go is none of those three. It looks easy on the surface, but it contains a ton of paper cuts and nonsense; people do what looks logical, and then every expert Go programmer on HN points and laughs at them. It's not even as fast as Java (for comparison, look at how much careful engineering goes into Java's low-latency garbage collectors, while Go just hyped the shit out of their inferior version, which they had to replace in due course), and calling out from Go is dismal in performance. As for safe, are you kidding me? Go has all this green-thread stuff with not a single safety mechanism by default. It's all up to programmers to implement it themselves, and they don't get any help; they can't even say that something is immutable. It even regresses on C in that there is no way to declare an enum type, so if you have values representing a bounded set of choices, you can't enforce it.

The worst thing is that this travesty of a language will win and lead to decades of stagnation.


Go is easy, decently fast, and reasonably safe. It's a good tool for many problems. Rust is complex, extremely fast, and notably safer than Go. It sounds like Vale is trying to push the envelope of what is possible, which to me, means having a simple yet expressive, extremely fast, and extremely safe language. Go makes compromises in all of those areas (as do many other good languages). I'd argue that Go isn't really near the boundaries in any of those areas, so it's not surprising that they didn't use it as a comparison to illustrate those three aspects of a programming language. It seems Vale's goal is to have their cake and eat it too.

Agreed that Go trades off some performance and safety. I'm not sure what "easy" means exactly, but IMO Go has the shortest learning curve of any language, whether measured by "time to productivity" or "time to mastery", and it also has the fastest developer velocity IMO (and I'm coming from Python, which has that reputation as well). It seems like Go is at the boundary for anything worth calling "easy".

People feel like they have a short learning curve and that they've mastered Go. Greenfield programming in Go is an amazingly empowering feeling. The issue is that they don't see the problems they create until something implicit (a default value after a refactor, or one of Go's paper cuts) leads to a production issue.

One central issue is that anything touching concurrency in Go is a tangle of nastiness with implicit effects. But apparently Go is "natively concurrent" and everyone simply prays that the race detector is good enough.

https://eng.uber.com/data-race-patterns-in-go/


Yeah, Go doesn’t have top safety, but it lets you move a lot faster than safer languages. You can spend the extra time finding and fixing most of these bugs and still have time to spare compared to safer languages.

Parallelism does need some design to avoid making a mess (minimize shared mutable state and lock what you can’t minimize and you’re fine). Note also that Rust and other safer languages don’t help much because most shared mutable state is remote and accessed by multiple application processes (e.g., an object in an S3 bucket, a file in an NFS volume, etc). With Rust or with Go, you need to test for these kinds of bugs, but with Go you’ll be able to start your testing sooner (with Rust you’d still be writing the app code).


In what respects is Rust safer than Go? They are both memory safe, and both have safe concurrency options (in addition to unsafe ones).

As we saw in a recent back-and-forth you can come to believe Go is really safe by just having incredibly conservative expectations. If your expectations are low enough Go can meet those low expectations, and you may see a language like Rust as not adding anything since you'd never express anything that's safe (and perhaps faster) in Rust but would be unsafe in Go.

I’m sure you do have a substantive point to make, but all that you actually do in the text of your post is throw shade at Go programmers. I can’t find the back and forth to which you refer.

I'm aware of the potential for data races to lead to data corruption in Go, although as an exploitable issue it seems a bit theoretical. There are also soundness bugs in Rust from time to time (as for example with the whole Pin saga).


> I can’t find the back and forth to which you refer.

https://news.ycombinator.com/item?id=31703732

> I'm aware of the potential for data races to lead to data corruption in Go, although as an exploitable issue it seems a bit theoretical.

Go's language design says it doesn't care about this problem, Rust's language says it eliminates this problem. If you see these as basically the same because you also don't care about the problem, it seems like your original claim (that they're both safe) was simply wrong.

> There are also soundness bugs in Rust from time to time

To be absolutely clear here: Data races in Go are not "bugs". Go is specifically designed not to even be safe if you have a data race which touches compound types. Undefined behaviour, you lose, game over, the language designers have nothing further to say on the matter. If you raise a ticket saying "I had this race and now everything is on fire" in Golang it will get a WONTFIX or whatever the equivalent is.


I think the issue is a bit more subtle than you're suggesting because Rust (as you know) only promises to eliminate the problem in safe code. Go doesn't have the same type infrastructure for stopping you from accidentally sharing data between multiple threads, so it doesn't have the same sharp distinction between 'safe' and 'unsafe' concurrency options. But it's also not as if Go's concurrency story is just 'share data and hope for the best'. Idiomatic Go code should be using channels.

In short, both languages let you write memory unsafe code if you want to, but both discourage it and make it easy not to do so in most cases. Rust discourages it in a more bondage and discipline kind of a way. But it still falls to the programmer to verify the unsafe kernel of their application to their satisfaction. That is, Rust provides a 'here be dragons' warning by forcing the use of an 'unsafe' block, but the language itself doesn't offer any assurances about the correctness of an application's kernel of unsafe code. Modern async Rust doesn't uniformly discourage the use of idioms that make use of unsafe code. See for example the discussion of stack pinning here: https://rust-lang.github.io/async-book/04_pinning/01_chapter... ("A mistake that is easy to make is forgetting to shadow the original variable since you could drop the Pin and move the data ... (which violates the Pin contract).")

I'd advise against inferring that people "don't care" about these problems, etc. etc. This kind of personal stuff just makes it harder to focus on the technical details.


Unless they've made a recent radical change, data races on multiword values (e.g. interfaces and slices) are unsafe in Go. In other GC languages (e.g. Java and C#), data races don't compromise memory safety, but Go made different implementation choices.

Why on Earth is Go plugged into any discussion of languages operating on a much lower level? Go is closer to NodeJS than to Rust from every point of view; it is not C 2.0.

I get the frustration but tbh comparing Go to NodeJS feels just as absurd.

I wish JavaScript had the memory layout guarantees that Go has, that would make it a lot easier to work around memory-based bottlenecks.

And no, the fact that I can use low-level bindings in NodeJS is not at all similar.


Go has GC and a runtime, which this might not have?

For anyone who was confused like me: this is unrelated to the Vala programming language [0], which is a more traditional OO language based on GLib. E vs A.

Looks very interesting, and the blog has very nice writeups; I found the article on mutable/constant variable syntax [1] particularly clever. Though I'm unconvinced by the final decision, it's certainly a breath of fresh air.

[0] https://en.m.wikipedia.org/wiki/Vala_(programming_language)

[1] https://verdagon.dev/blog/on-removing-let-let-mut


Just had a thought. I wonder if Vala would have been better off as just a project adding GLib support to D and maintaining that link. Vala is amazing at what it does, but then you'd get an even richer language, in my opinion, to work on top of, with less overhead. Not to knock Vala; I appreciate it for what it is. It's also the main reason why ElementaryOS has some amazing applications out of the box.

The language creator(s) don't appear to have pretensions of displacing the entire world. Look at this quote:

"Vale is powerful enough to use, and it feels really good to finally use Higher RAII in a real-life program. It's an incredibly versatile and valuable pattern, one that I hope Vale will bring into the mainstream!"

Bring [Higher RAII] to the mainstream. It's a sandbox to experiment with a composition of ideas in the hope that some of these ideas influence the 'mainstream' including, perhaps, D.

The number of new, fast, safe (for some value of safe) native languages that are appearing is amazing. I think this is down to powerful tools for creating native languages. We're in a new era of language development; the yawning gulf between C/C++ and the world of managed/scripting languages is being rapidly filled. This shouldn't be discouraged with "why not just make my preferred thing better?" Eventually it will.


GP that you're replying to is talking about Vala, not Vale

Interesting to read about variable mutability here:

https://verdagon.dev/blog/on-removing-let-let-mut

Originally there were let and let mut, like Rust. That was changed to a simple assignment for declarations, with the compiler requiring the set keyword for subsequent reassignment.

While I can see that this might make for cleaner code while still letting the compiler do its job, surely one of the benefits of explicit immutability is to stop the coder accidentally letting off foot guns. If simply setting a variable after declaration makes it mutable, it feels like it could open the door to data in a code block changing behind the scenes when the assumption in that block is that it is fixed.

I'd be interested to know how this works in everyday practice, and about possible unintended effects.


Agreed. Seeing const vs let when reading / writing TypeScript gives me a clear, explicit indication of intent. I like having this be explicit.

I would definitely prefer swift's 'var' or JavaScript's 'const' here.

The solution in the link adds cognitive complexity. Imagine you had bad code (worst-case scenario, which always happens): a function with 400+ lines. You would have to scan the entire function just to figure out whether a variable is mutable, and then force yourself to remember _all_ the mutable variables when evaluating behavior.


Agreed, I otherwise love the language just at face value but this was a big "ehhh" for me. I'd rather be boxed in and have to opt-into increasing bug surface area.

Personally, I don't like these two kinds of syntax for declaring and assigning a mutable variable, because if code is moved around - which happens all the time - I have to switch the syntax for declaration vs assignment. Same thing with Go's := and =.

The vast majority of variables in your program won't be mutable or changed around, so why add the mental burden of mutability to every single variable just for some tiny minority of cases?

It doesn't have to be a mental burden. Just write all your variables as mutable, then let the IDE automatically correct them to immutable. E.g.: "Warning: This variable is never mutated. Would you like to make it a const? <Fix>"

Let's say it is a mental burden. I'd rather have the burden when writing the code over when reading the code. (https://news.ycombinator.com/item?id=31823991)


Related:

The Vale Programming Language - https://news.ycombinator.com/item?id=25160202 - Nov 2020 (171 comments)

The Next Steps for Single Ownership and RAII - https://news.ycombinator.com/item?id=23865674 - July 2020 (38 comments)


Given the possible confusion with Vale [1] and Vala [2], I suggest renaming the language to some new, unambiguous name ASAP in order to get more visibility.

Otherwise, it looks very promising!

[1] https://github.com/project-everest/vale [2] https://en.m.wikipedia.org/wiki/Vala_(programming_language)


GitHub could show a warning about possible name collisions when creating a repo. We see this problem often. If anybody from GH is listening :)

GitHub already has a feature to check if any public GH repo has this name. They've called it search.

I don't think this is really a problem. I know Vala and I don't see a problem with confusing the two.

It's because you know Vala that you don't see the problem.

I didn't confuse it with Vala, but with Val:

https://github.com/val-lang/val


Sorry, but I don't quite get it. It seems that a generational reference is a form of memory tagging that protects users from use-after-free, while reference counting tracks the number of references to an object and frees it when it is no longer reachable.

It seems to me that generational references aren't replacing reference counting, as they don't decide when to free the object? Perhaps this is more like a weak reference or something?


They are very different in those respects, correct. But I think the point here is that both are ways of achieving memory safety.

So the non-owning references are kind of like weak references, but not implemented with Rc? I guess I now understand their purpose.

I'm wondering how this deals with multithreading, though. For example, object A might be dropped in thread 1 (incrementing the generation number in thread 1) and accessed in thread 2 (reading the generation number). How do they guarantee an error in this case without using atomics? (Although this might not be a valid question, because you can't really say A is dropped before thread 2 accesses A without some sort of synchronization...)


Question: it looks like the generational memory strategy prevents use-after-free bugs at runtime, which is nice, but a language like Rust prevents these kind of bugs at compile time. Am I missing something, or is Rust fundamentally safer than Vale?

Vale compares its safety to Rust's on a few occasions[0][1] and the exact opposite claim is made, i.e. Vale is safer than Rust.

[0] https://verdagon.dev/blog/hybrid-generational-memory#afterwo...

[1] https://vale.dev/comparisons#safe


> C++ is completely unsafe. You can largely trust Javascript code that someone else wrote.

This is an interesting claim since JavaScript runtimes are written in C++. I suppose we do trust them, but only after wrapping them in all the other layers of safety we can get our hands on.


But when you start with that, it's turtles all the way down. Who guarantees that the Rust compiler is correct? Who guarantees that a kernel implements syscalls correctly? Who guarantees that RAM bits don't flip? Their point wasn't about the implementation of JS (which can always be improved), but on the definition of the language.

Ah, but you actually do need to care about all that stuff too if you're going to run code someone else wrote in your browser, or at least if you're a security engineer for that browser. That's how 0-days get you.

It's not a problem for your own programs because you're not trying to hack yourself, but if you're an aggressive enough developer you will find bugs in your compiler. In fact, this is a great reason to have full test coverage.


Good question!

If one defines safety as a lack of UB or vulnerabilities, then I'd say Vale's a bit safer than other languages like Rust.

I say this because:

1. There are no unsafe blocks in Vale.

2. Vale memory is decoupled from native memory; it only passes messages and handles between them, and uses a different stack. [0] This prevents accidental bugs in unsafe code from corrupting data in safe code. We just finished our proof-of-concept of this last month!

3. Building on that, we could automatically sandbox any native code for a module in theory, with either subprocesses or wasm2c. [0] (Note we've only just started on this part.)

I'd say that's safer than Rust, where unsafe code can undermine the code around it.

I'm particularly excited about how this might make it safer to use dependencies; #1 and #2 protect from accidental corruption, and #3 could help protect against malicious corruption and certain kinds of supply chain attacks. We're even tossing around a potential #4 to add permissions, so dependencies must be whitelisted to be able to access network, files, etc.

We're also thinking about relaxing the above 3 restrictions on a per-module basis if we can do it in a way that doesn't compromise the safety of the ecosystem (note that these relaxations are not implemented yet):

* Instead of full sandboxing (3), we can rely on the decoupling (2), if we trust a dependency's intentions.

* Have operators to skip generation checks for extra speed. [1] These operators would be by default ignored for dependencies. It's unclear if this will help, as the combination of regions [2] and HGM [3] might combine to eliminate the vast majority of generation checks. We'll see!

* Perhaps add a keyword for blocks, to ignore all generation checks within (maybe `unsafe`?). This would also be ignored by default for dependencies.

Rust must always allow unsafe blocks, because libraries often believe it's necessary for their performance, or sometimes for just working around the borrow checker. Vale would default to disallowing it (and sandbox FFI), and it would be much more noticeable (and suspicious) if a dependency said it wouldn't work without unsafe; they can't just sneak it in there like they can in other languages. These adjustments are still under consideration (thoughts are welcome!) but as of today, there's no unsafety in any Vale code.

To summarize, Vale has stronger protections against memory unsafety and UB, so I'd say it's a bit safer. Though, if one wants to prevent all bugs at compile time, then one shouldn't be looking at Rust or Vale which often panic, but instead look at languages like Pony (which doesn't even have panics, hence its amazing uptime) or proof languages like Coq.

Hope that helps!

[0] https://verdagon.dev/blog/next-fearless-ffi (still a draft, read generously!)

[1] https://vale.dev/guide/unsafe

[2] https://verdagon.dev/blog/zero-cost-refs-regions

[3] https://verdagon.dev/blog/hybrid-generational-memory


Why would a run-time check make anything less safe than a compile-time check, if they perform the same check?

In my opinion, a runtime check is less safe than a compile-time check. In both cases, the language is free from undefined behavior when using a freed reference. However, while a language with a runtime check might not have UB, programs written in that language might: what happens in the program when the use-after-free occurs? Depending on how the error is generated and propagated/handled, the program could end up in an undefined state. Programs written in safe Rust, however, will never have such undefined behavior.

Rust with RefCell will have such runtime errors, though. (Likewise, Rust using the indexes-in-an-array pattern can have use-after-free, but the harm is limited.)

Also, IIUC this is not actually UB in Vale: it's a guaranteed error.


Yes it’s a guaranteed error in Vale, but the error may cause UB in the application logic. This won’t happen in Rust because the program won’t compile if a use-after-free is possible.

According to your definition, everything is undefined behaviour. Imagine a simple program that successfully halts if its input is empty and returns an error if its input is not empty.

C adds a third state: undefined. When you reach this state, you know nothing about what is happening in the program.

Now, Vale adds a third state called "memory error", which is just a refined error and well defined. This means that if you have handled the error case, you have already handled the memory-error case, even if not in a satisfactory way.

What is strange to me is that you consider the former okay and the latter "undefined behaviour in the application logic", when it just means an additional exit state has been added, one with highly defined behaviour.


I wouldn't say it causes UB. It causes the program to halt, deterministically and safely, but maybe annoyingly.

Yes, it's not as good as a static guarantee. But there are tradeoffs where it makes sense. Again, RefCell in Rust does the same - it's a useful technique.


Because at runtime you find out about the bug only when it occurs. A compile-time check refuses to compile the buggy code.

A runtime check can be more precise; a compile time check has to reject more potentially unsafe code, which means people might turn it off.

I think there's a lot of possibility in runtime assertions that fail as fast as possible, i.e., the instant the program enters an invalid state, rather than at the later time when you reach the part of the code where the assertion is written. I don't know if there are any systems like that; it's just something I thought of.


I share this perspective, it's why Vale chose generational references over a borrow checker.

In practice, the borrow checker has to reject a lot of perfectly fine patterns, such as observers, dependency injection (the pattern, not the frameworks), delegates, backreferences, many forms of RAII [0], graphs, etc. Sometimes, the workarounds lead to more complexity, less flexibility and decoupling, and more refactoring that would be unnecessary in other languages. This is likely why GUI programming is difficult with the borrow checker, and why one has to bring in frameworks to compensate.

The borrow checker can prevent certain kinds of logic bugs at compile-time, which might mean a release lets 9 bugs into production instead of 10 or 11 (comparing to a safe language with a strong type system). However, that tradeoff might not be worth it, depending on the domain. Flexibility and decoupling can be more important, at least in the domains I've worked in (roguelike games, web servers, apps) and the size of the program. There are domains where it's better to add more complexity to detect even more bugs at compile-time, that's where I'd choose Rust (or perhaps GC'd FP languages, which prevent even more bugs). Just my two cents!

Note that this is only a problem if a programmer is a bit too religious about borrow checking; in practice, Rust offers reference counting, which nicely avoids these problems.

One of Vale's principles is to move checks up to compile time, but prefer not to when it causes too many architectural problems or "infectious leaky abstractions" so to speak. This is also why Vale will be using coroutines (similar to Go) instead of async/await.

It's also why we're adding a region-based borrow checker, which is opt-in and doesn't impose constraints on its callers. [1] If we do it right, it should give Vale a lot of the performance benefits of Rust's borrow checker, but without the complexity and architectural constraints.

[0] https://verdagon.dev/blog/higher-raii-7drl

[1] https://verdagon.dev/blog/zero-cost-refs-regions


Ah yes, I see. Strictly speaking, both are exactly as safe, since safety is about whether unsafe actions can be done. It does not matter at which point the unsafe action is stopped.

But it could have some implications for software stability (compile time being obviously better) and productivity (run time being obviously better).


> It does not matter at which point the unsafe action is stopped.

It does in practice. With a compile time check the issue simply cannot happen, and there is no need to deal with it in the code or at system level.

With a runtime check the issue may happen, but will be detected when it does. Still, the choice then is either to crash or to deal with it with some runtime recovery action. Either way has some cost: the system around the executable must deal with more crashes, or the code gets more complex (and sometimes more brittle, even if the intention is the opposite).

Only the compile time check makes an issue really go away. This is to me the attraction of strong typing and any form of compilation time check. There's a price to pay too in accepting the related constraints: type checking must pass. So there's a cost here too. For complex or sensitive applications I personally much prefer this upfront cost.

But it's definitely not the only way: Erlang has runtime type checks but a very good runtime error-handling framework (the reference?), and it works well. Still, most environments relying on runtime checks are not at this level.


>Only the compile time check makes an issue really go away.

The issue that goes away in both cases is the issue of unsafe memory access. There are disadvantages to runtime checks, as you mention, but there is no difference in the level of safety achieved.


You need to consider not just the memory access itself, but the consequences of preventing it.

With a compile time check, there's a development cost but absolutely no runtime consequence.

With a dynamic check, sure the memory access is detected and blocked. But if you stop there the application crashes, which may be completely unacceptable. In an embedded system such a crash may be as bad as the incorrect access itself.

More generally, the issue is not so much the incorrect access as its possible adverse consequences. With a compile time check, there are no runtime consequences. With a runtime check, there are still runtime consequences: either the impact of an application crash, or the extra error handling code and behavior to deal with the detected wrong access and mitigate it at runtime. Whether such consequences are acceptable depends on the context, but they exist and do make a difference compared with a compile-time or static-analysis check.
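A small Rust sketch of the distinction (illustrative only, not tied to any specific language in this thread): a lifetime error is rejected before the program ever runs, while a bounds check compiles fine and forces the caller to choose between crashing and writing recovery code.

```rust
// Compile-time check: the borrow checker rejects this outright,
// so the error can never reach production (shown as a comment,
// since it would not compile):
//
//     let r;
//     {
//         let s = String::from("temp");
//         r = &s;   // error[E0597]: `s` does not live long enough
//     }
//     println!("{}", r);
//
// Runtime check: this compiles, but the out-of-range access is
// only caught at runtime, so the caller must handle it somehow.
fn main() {
    let v = vec![1, 2, 3];
    match v.get(10) {
        Some(x) => println!("got {}", x),
        // The extra recovery code the comment above is talking about:
        None => println!("index out of range, recovering"),
    }
}
```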


>In an embedded system such a crash may be as bad as the incorrect access itself.

I don't agree on this point. An incorrect access on an embedded system has the potential to cause all kinds of horribly subtle bugs involving memory corruption. A simple crash is generally much better.


Possibly, but I won't argue about hypothetical kinds of bad when a runtime issue happens ;) The point I tried to make was that you want neither for such use cases. Hence compile time verification (type-based checks, static analysis, proofs for the most critical cases).

@verdagon Can you please explain the comment in https://verdagon.dev/blog/generational-references that "programs dereference less than they alias and dealias"?

Does dereference mean accessing a field of an object? Do alias/dealias mean creating/destroying another variable reference?

If the answer to the above is yes, don't programs dereference more than they alias/dealias? I mean, in:

    var alias = shared_ptr_of_some_object;
    for (int i = 0; i < 1000; ++i) {
        alias.foo += bar_fn(alias.baz);
    }

the code creates just 1 alias but dereferences 1000 times, and the above looks like very typical code.


Yep, that's what we mean by dereferencing, aliasing, and dealiasing.

That statement is referring to our sample program, a roguelike game that we use for benchmarking. As for why this is the case, I'm not sure! I suspect it's because a lot of classes will alias/dealias objects without ever dereferencing them, such as List<T>, HashMap<T>, etc.


> C++ is a low-level language, meaning that theoretically, it cannot be beat; given unlimited time, you can optimize C++ code enough to beat anything.

That's just untrue, right? You would have more room to optimize by writing assembly directly.


Hand-optimizing assembly becomes too hard for humans past a certain size, which is not yet too large for LLVM, GCC, or ICC.

OTOH Fortran (and, I suppose, something like APL / K / Q) can generate faster code than C++ because Fortran guarantees the absence of aliasing. In C or C++, aliasing precludes certain kinds of optimizations, and proving the absence of aliasing at a particular spot may be too time-consuming (if tractable at all) for the compiler to try.


I guess that statement is comparing C++ with other high level languages. Not assembly. After all, one can drop down to assembly from any language. But "theoretically" is not interesting. What's more interesting is what is practically possible. Even Python can be made as fast as assembly by writing everything in the latter inside a Python shell, but those are unrealistic comparisons. Practically speaking C++ is difficult to beat, in terms of speed, from other languages at a similar level of abstraction.

Yeah but the point would be, if you can beat C++ with assembly, then theoretically there can exist a compiler for another language which can compile to that same assembly, meaning C++ isn't at the theoretical limit.

Maybe it's a nitpick, but it gives me a little bit of pause to see a statement like this on a language website's page, since it seems to indicate a bit of a weak or sloppy understanding of computer science fundamentals.


Is it pronounced va'-lay [1] or vehl [2]

1. Vale: it's worth it, fine, in Spanish

2. Vale: an area of lowland between hills or mountains, in English


The first pronunciation is also a Latin form of salute, akin to "farewell".

Literally "stay healthy" (same thing as "salute" which is "be healthy / safe").

Might be an apt connotation for a language that tries to make code less error-prone and less verbose.


Comparison of Vale with C++, Javascript and Rust, with detailed explanation of how it aims to do things differently in the three categories of speed, safety and ease of use.

https://vale.dev/comparisons


whenever I see a list of properties like this I always add 'choose two' in my mind.

In my experience it's generally "easy" that gets dropped. Reading the site, I didn't see anything that made me think this is easier to learn than other mainstream programming languages, so I think that's the case here as well.


It just needs to be easier than Rust and C++, and more expressive than Go while being nearly as performant. If it comes with a good stdlib, even better.

Does it have higher order functions?

It does, currently in the form of IFunction1<R, P>, IFunction2<R, P1, P2>, etc.

We also just introduced variadic generics in version 0.2, which is the first step towards a unified IFunction<R, ...> interface.

Long term, we'll have syntactic sugar for it: func(...)R
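For readers unfamiliar with the pattern: fixed-arity function interfaces like these are roughly what Java does with its functional interfaces. This is an illustrative Java sketch, not Vale's actual definitions; it shows the one-interface-per-arity shape that variadic generics would later unify.

```java
// One interface per parameter count, analogous to IFunction1, IFunction2, ...
interface IFunction1<R, P> { R apply(P p); }
interface IFunction2<R, P1, P2> { R apply(P1 a, P2 b); }

public class Main {
    // A higher-order function: takes a function as a parameter.
    static int twice(IFunction1<Integer, Integer> f, int x) {
        return f.apply(f.apply(x));
    }

    public static void main(String[] args) {
        IFunction1<Integer, Integer> inc = p -> p + 1;
        IFunction2<Integer, Integer, Integer> add = (a, b) -> a + b;
        System.out.println(twice(inc, 5));    // 7
        System.out.println(add.apply(3, 4)); // 7
    }
}
```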


Why are semicolons mandatory? Wouldn't be cleaner without this restriction, like in Kotlin?

A valid question! Semicolons and other forms of syntactic redundancy often enable much better error messages. We're leaning towards keeping them in Vale.

However, we can always change our minds later and make them optional, so it seems wise to just require them for now and revisit later.


Ave, Vale!

This just makes me incredibly sad when I see a new programming language that does not make even a tiny effort to improve syntax. Always the same old mess of {}[]() all over the place. And even ; to end the line. Sad.

What might be an example of syntax improvement, in your eyes? Python-style syntactic indentation? Haskell or Ruby-style optional parentheses? Kotlin / Scala-style lambda arguments that make functions meld with built-in statements? S-expressions for everything?

Yes! All of those, in any combination. And new innovations on top.

Literally anything is better than dragging conventions from the 1960s forward, from a time when ASCII was all that was available.

Use whitespace, try Unicode glyphs for different syntactic functions, try to approximate natural language, TRY NEW THINGS FFS.


Have you considered that people might actually like that syntax style? Also, there's a massive benefit to familiarity: you can easily read code in most languages without knowing them because they look similar. There's a reason most people don't like to write (((((((((lisp))))))))). There's nothing wrong with exploring new things, but right now all you're doing is ranting about the syntax without giving any examples of things that are "bad", and honestly it makes you sound a bit like one of those designers who redesign everything just for the sake of redesigning.

> And even ; to end the line.

Why do people even hate ; ? It gives you better error messages and it's no different from using . to end a sentence in spoken languages.

> Use Whitespace

When you make whitespace significant, the only thing you gain is that it forces people to format their code (see Python), but it also often leads to idiotic decisions (e.g. Nim not allowing tabs, which are simply superior to spaces; there is literally not one reason to ever indent with spaces). C++ and co. have the right idea: the only thing whitespace should do is separate tokens, and then people can format the code as they see fit.

> Try unicode glyphs for different syntactic functions

Now, sure, Unicode can make things more readable in some cases, but the big problem with Unicode is how you input those characters. Your keyboard is basically ASCII, which leaves you with workarounds like Alt codes or Julia's LaTeX conversion, and those aren't supported on every platform/editor. Not to mention that for some glyphs you need to install a new font.

> try to approximate natural language

What would you like to see? I don't see what you'd gain from that except mostly making things more verbose. Spoken languages are quite different from programming languages. As soon as you have a large block of instructions (aka a program) you naturally resort to some kind of structuring (e.g. making a list that someone else has to check from the top) and that will pretty much look like pseudo code already. Sure you'd do something like `list.add(foo)` instead of "Add foo to the list" but that's just because the former is much easier to parse and encode into rules for the computer.



