About V, the language Volt is written in (volt.ws)
570 points by AndyKelley on Feb 6, 2019 | 342 comments


Developer here. I was going to post this here in a couple of weeks after launching the product and creating a separate site for the language with much better information about it.

I'd also like to hear your opinion about not allowing function arguments to be modified, except for receivers. This is an idea I got that isn't really implemented in any language I know of.

For example: mut a := [1, 2, 3]

So instead of multiply_by_2(&a)

we have to return a new array (and this will later be optimized by the compiler of course)

a = multiply_by_2(a)

I think this will make programming in this language much safer, and the code will be easier to understand, since you can always be sure that values you pass can never be modified.

For some reason all new languages like Go, Rust, Nim, Swift use a lot of mutable args in their stdlibs.

You can still have methods that modify fields, this is not a pure functional language, because it has to be compatible with C/C++:

fn (f mut Foo) inc() { f.bar++ }
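Since V's compiler isn't publicly available yet, here is a rough sketch of the proposal in Rust: arguments are read-only, so a transformation returns a new value, while mutation is still allowed through a receiver (the `multiply_by_2`, `Foo` and `inc` names mirror the examples above; everything else is made up for illustration):

```rust
// Arguments are read-only: return a new value instead of mutating in place.
fn multiply_by_2(a: &[i32]) -> Vec<i32> {
    a.iter().map(|x| x * 2).collect()
}

struct Foo {
    bar: i32,
}

impl Foo {
    // Mutation is still possible, but only through the receiver.
    fn inc(&mut self) {
        self.bar += 1;
    }
}

fn main() {
    let mut a = vec![1, 2, 3];
    a = multiply_by_2(&a); // rebinding the name, not mutating through a pointer
    assert_eq!(a, vec![2, 4, 6]);

    let mut f = Foo { bar: 0 };
    f.inc();
    assert_eq!(f.bar, 1);
}
```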

I'm not sure if I'm the target audience for this (low-latency trading), but here's my thought - code which would allocate in a fast path is a strict no-go for me, and this runs fairly close to that in a few regards:

> It seems easy to accidentally make non-allocating code allocate by changing some variable names (b = fnc(a) - oops, allocation)

> I would be extremely wary of trusting a compiler to realize what sort of allocations are optimizable and not - I already have problems with inlining decisions, poor code layout, etc in fairly optimized c++/rust code compiled by gcc+llvm.

Replace allocation with any other expensive work that duplicating a datastructure would result in, and you have the same story. I suspect many of the people that would be evaluating this compared to C/C++/Rust have similar performance concerns.

You need to stop pretending they are not globals. Just accept that you work on one continuous patch of memory and are simply passing pointers around. If you don't lie to yourself, you won't shoot yourself in the foot when the compiler doesn't see through your lie.

Not all abstractions are leaky; many compilers/language grammars enforce the paradigm of local lexical contexts.

Playing a game without accepting its self-imposed restrictions, which the player took on voluntarily, would be lying to oneself.

Thanks for your input. I'll do more research on this.

I think Rust and Swift's approach of making the user explicitly annotate (both at function definition time and at call time) these kinds of parameters works pretty well.

I think you're right that it can often be an antipattern. But there are also use cases (usually for performance reasons), and the real problem occurs when you are not expecting the modification to happen. If the parameter is annotated then it's obvious that it might be mutated, and less of an issue...

P.S. Looking forward to the open source release. This language looks pretty nice/interesting to me, but there's no way I would invest time into learning a closed source language.

I don't think it's possible to have a closed source language in 2019 :) It will be released as open source this year.

Technically (and we are technical folks) it's a language with a closed source reference implementation. A serious language designer is going to specify his language, so that other implementations, closed or open source, are possible if not available. C is the ur-example: there are dozens of implementations, some proprietary and some free.

We aren’t in a phase where languages have multiple implementations right now.

I can’t think of any well-known newish language (created in the last 10 years, say) with multiple implementations. Rust, Kotlin, Swift, Julia, Dart... any others?

Go might be a counterexample with its gccgo implementation, but that was built by the same team and I have the impression (maybe mistaken) that it’s fallen by the wayside.

I don’t know if this indicates a really new language development style, with less emphasis on specification, or if it’s just a cyclic thing and some of these new languages will gain more implementations as they get more established.

https://github.com/thepowersgang/mrustc is an alternative compiler for Rust.

Yeah, but no language of the ones the parent mentioned has a non-toy, non-personal-project alternative compiler.

Perhaps only Golang (gc and gccgo).

For all others, everybody uses the standard compiler. Even in Python, PyPy is not even 10% of the users.

Whereas in C/C++ and other such older languages there are several implementations (MS, GCC, LLVM, Borland, Intel) with strong uptake and strong communities/companies behind them.

JavaScript isn't quite as new as those listed, but it has at least 5 or 6 implementations.

Yeah, JS too.

Yes, go has a formal specification, which not only opens the possibility for alternative implementations, but, more importantly, allows for the development of tools like linters.

It's way harder to develop tooling for a language which is only defined as "what its compiler can compile".

Cool! Looks like that’s not quite ready for prime time, but I hope it succeeds.

While older than 10 years, it still kind of falls under newish, but D has three compilers.

Python has PyPy?

Python isn’t new! It’s almost 30 years old.

I was about to suggest Python 3, but it seems Python 3.0 was released December 3, 2008. Barely missed it!

There's no reference implementation yet.

I had a small page about the language up. I don't have some of the most basic things implemented and figured out yet.

I’m very happy to see that you’re doing this. Keep at it!

This. Letting the user annotate it is better than enforcing a behavior that is adequate in some scenarios but awkward in others. In gamedev, for example, generating copies in the game loop is only acceptable for small objects like vectors, and only if they're allocated on the stack. Even if the compiler optimizes it, it is better to express the intent clearly in the code.

I'm planning to optimize this so that no copies are created.

Yeah, but I meant that a = multiply_by_2(a) still looks like it is copying things even if it isn't. Let it be immutable by default and mutable with a keyword.

For what it’s worth, if you can optimize away the immutability (copying), I’d much prefer expressing myself this way, rather than using mutability.

In Ada, procedure and function arguments must be annotated with whether they are “in”, “out”, “in out”. “In” can never be modified, “out” has no initial value and “in out” has both initial value and can be modified.

Overall, if you don’t know Ada, I’d recommend taking a look at its features. It has very similar design goals to your V.
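Keeping one language for examples in this thread, here's a rough Rust analogue of Ada's three parameter modes (this is a sketch, not Ada; the `scale` function and its names are invented): "in" maps to pass-by-value or a shared reference, "out" to a return value, and "in out" to `&mut`.

```rust
// Ada parameter modes, approximated in Rust:
//   "in"     -> by value / &T:  the callee can read but never modify it.
//   "out"    -> a return value: nothing flows in, a value flows out.
//   "in out" -> &mut T:         an initial value flows in and may be modified.
fn scale(factor: i32 /* in */, total: &mut i32 /* in out */) -> i32 /* out */ {
    *total *= factor;
    *total + 1
}

fn main() {
    let mut total = 10;
    let next = scale(3, &mut total);
    assert_eq!(total, 30); // "in out": modified for the caller
    assert_eq!(next, 31);  // "out": produced fresh
}
```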

Although you can certainly go the pure functions route, I wouldn't recommend it for performance.

There's a false dichotomy between functions and methods, which are simply (sometimes dynamically dispatched) functions with a special first argument. If you allow mutable first arguments, why not any argument?

Instead of the language deciding what's mutable and not, I'd rather have a const system like C/C++ to ensure that changes aren't happening behind the programmer's back.

Hello, this is an excellent language! Have been looking for something like this for a long time!

Re: "not allowing to modify function arguments except for receivers" -- maybe instead make all fields const by default, with an optional mut modifier?

A quick question, how does hot reloading work (with presumably AOT compilation and no runtime)?

(Perhaps there's an OS mechanism to change already-loaded code in memory that I should know about.)

> For some reason all new languages like Go, Rust, Nim, Swift use a lot of mutable args in their stdlibs.

Both Rust and Swift require specifically opting into parameter mutation (respectively `&mut`[0] and `inout`) and the caller needs to be aware (by specifically passing in a mutable reference or a reference, respectively), only the receiver is "implicitly" mutable, and even then it needs to be mutably bound (`let mut` and `var` respectively).

Inner mutability notwithstanding, neither will allow mutating a parameter without the caller being specifically aware of that possibility.

The ability to mutate parameters is useful if not outright essential, especially lower down the stack where you really want to know what actually happens. Haskell can get away with complete userland immutability implicitly optimised to mutations (where that's not observable); Rust, not so much.

[0] or `mut` in pass-by-value but the caller doesn't care about that: either it doesn't have access to the value anymore, or it has its own copy
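As a concrete illustration of the double opt-in described above (the `double_in_place` function is made up for the example), the mutability shows up both in the signature and at the call site:

```rust
// The annotation appears at the definition site (&mut Vec<i32>)...
fn double_in_place(v: &mut Vec<i32>) {
    for x in v.iter_mut() {
        *x *= 2;
    }
}

fn main() {
    let mut v = vec![1, 2, 3];  // ...the binding itself must be `mut`...
    double_in_place(&mut v);    // ...and the caller explicitly passes `&mut v`.
    assert_eq!(v, vec![2, 4, 6]);

    // double_in_place(&v);     // would not compile: `&` is not `&mut`
}
```

So a reader of the call site can never be surprised that `v` was mutated.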

> (and this will later be optimized by the compiler of course)

Do not underestimate this part. It can take a surprising amount of effort to get these things right and performant.

I wanted to ask OP about this part specifically, so I’ll write my questions here.

a) how do you plan to do this?

b) will the optimization still kick in if you name the return value something else than “a”?

c) what if “a” is an argument passed to the function that calls “multiplyBy2”? Then doing an in-place update would modify the value of “a” for some other function that has also been passed “a” as an argument.

This is exactly what I was looking for. There's no real option for a language that allows interactive coding and is easily embeddable. Please open source it asap. You will get contributors, starting with me.

I’m not taking anything away from the goals of this project but I do feel it’s worth mentioning that there actually are lots of languages that offer interactive coding and are embeddable. Eg JavaScript, Python, LISP, Perl, Lua, etc. Heck, even BASIC fits that criteria.

I do wish the author the best with this project though. Plus designing your own language is fun :)

All of them have shortcomings when one wants multi-threading. The JavaScript, Python and Lua implementations are single-threaded. Perl is slow. LISP is ideal, but the commercial distributions with these features have licensing costs of thousands of dollars. The open source (Common Lisp) implementations all have other deficiencies: not easily embeddable and/or big image sizes and/or slow and/or poor GC. Currently the best open source option seems to be Gambit Scheme. I am playing with it, and while the author is extremely supportive, some points are still a little rough. In the single-threaded world there is already a clear winner, and that is Lua.

Ah yes. I understand a little more the context behind your post. And I don't disagree with the points you raised there either.

Thank you for replying :)

I actually think having different models (functional and imperative) just adds to confusion. I don't think immutability is all that useful personally, unless the language is purely functional to begin with. I'd keep it simple and stick to C as much as possible.

Why not go the other way and pass everything as a reference? After all, that's what Java does, and that's how you'd pass any struct in C anyway. It's a rare case when I need to forbid the called function from modifying an argument, for whatever reason or because I don't trust it - in that case you can make a copy beforehand or use a const modifier. But in most cases I'd expect functions to modify the memory that I pass in, instead of allocating and returning new structures.

Why not have a simple `class` construct as in JavaScript? Keeping functions together in a class is very convenient and means you don't have to pass the struct as the first argument each time. That way `Array` can be a class, and would always be passed by reference. No ambiguity there, class instances are always mutable. Everyone is already familiar with it, it works.

A class method can simply map to a global C function:

    ClassName_MethodName(*self, ...)
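Rust makes the same lowering visible in the surface language: a method is just a function whose first parameter is the receiver, and the method-call syntax is sugar for the fully qualified call (the `Counter`/`bump` names below are invented for the sketch):

```rust
struct Counter {
    n: u32,
}

impl Counter {
    // A method is a function with the receiver as its first parameter.
    fn bump(&mut self) {
        self.n += 1;
    }
}

fn main() {
    let mut counter = Counter { n: 0 };
    counter.bump();              // method-call syntax...
    Counter::bump(&mut counter); // ...is sugar for this fully qualified call
    assert_eq!(counter.n, 2);
}
```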

As an aside, using a syntax that people are already familiar with (and APIs!) would be great, and make something like this instantly usable. JavaScript has a fairly small core API which would be easy to emulate for example.

> Why not have a simple `class` construct as in JavaScript? Keeping functions together in a class is very convenient and means you don't have to pass the struct as the first argument each time.

This already seems to have a way to associate functions with data structures, in the same way that methods are done in Go, via a "receiver" before the function name


    type Something{}
    fn (self mut Something) method() { ... }
and calling the method with an instance

    x := Something{}

I'm thinking that it's easy to make a mistake that would prevent the optimization from happening, so I'd personally much rather be explicit about mutability than betting on having satisfied the optimizer.

This looks like a really interesting project, and I look forward to trying it out!

I have a possibly unhealthy obsession with using namedtuples in my Python and so this pattern appears frequently in my code:

    object = object._replace(foo=1)

But I usually encapsulate the _replace call in the class so it's more like:

    object = object.update_foo(1)

I personally find it can make the code easier to understand, like you say, but it seems like you're getting a lot of disagreement from the other comments.

> I personally find it can make the code easier to understand, like you say, but it seems like you're getting a lot of disagreement from the other comments.

The disagreement they're getting is not on the use of immutability and pure transformations, it's on the fantasy that a "sufficiently smart compiler" would be able to optimise this (especially non-trivial versions of this) into mutations under the cover.

Furthermore, V is apparently supposed to be a fairly low-level language, if you have to rely on the compiler to perform the optimisation, can't statically assert that it does so[0] and it fails to, that is extremely costly in both "machine" time and "programmer" time (as you'll start wrangling with the compiler's heuristics to finally get what you need).

If you want immutability, do it, but do it properly: build the runtime and datastructures[1] and escape hatches which make it cheap and efficient, don't handwave that the compiler will solve the issue for you.

[0] and if you are you may be better off just adding mutation to the language already

[1] non-trivial immutable data structures are tree-based and translate to a fair number of allocations, you really want an advanced GC

pass by copy is a traditional safe mechanism in functional languages, esp. for multithreading. It can also be made fast enough.

Keep in mind that many of these functional PLs encourage datatypes like first class linked lists that make this copy semantic very fast.

What about the case of multiple outputs? It's traditional to have functions that take other mutable arguments to store different auxiliary return values in. So, with this proposal, you couldn't do that and would have to construct random blobs to store all return values and then unpack them.

That's a solved problem, the "random blobs" is called a tuple. Or, since the language is inspired by Go, you can have bespoke multiple return values instead of a reified generic structure.
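For example, the classic "quotient plus remainder via an out-parameter" pattern becomes a plain tuple return (sketched in Rust with an invented `div_mod` helper):

```rust
// Multiple outputs without mutable out-parameters: return a tuple.
fn div_mod(a: i32, b: i32) -> (i32, i32) {
    (a / b, a % b)
}

fn main() {
    // Destructuring unpacks the "blob" right at the call site.
    let (q, r) = div_mod(17, 5);
    assert_eq!((q, r), (3, 2));
}
```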

I think it would be a bit weird if fields of structs can be modified, but bare values can't. Kind of feels like the inconsistency between `Integer` and `int` in Java.

So I would say that having to explicitly mark function parameters as mutable (like in Rust) is a better approach.

    > This is an idea I got that isn't really implemented
    > in any language I know of.
Is this different from an arg declared as a const reference in C++? Or, since fields are still mutable, like `final` arguments in Java?

I think yes, it's like declaring every single argument as const.

Algol 68 had this property. Consider

  STRUCT (INT x, y) z = (1, 2)

Here z is a const struct. If you wanted a variable, you'd have to say

  STRUCT (INT x, y) z;
  z := (1, 2)

or equivalently

  REF STRUCT (INT x, y) z = LOC STRUCT (INT x, y) := (1, 2)

Basically a variable has a REF mode.

If you can't mutate an argument, does that mean you can't call any of the argument's methods that mutate it, either?


Ever studied Limbo? Look at Limbo instead of Go.

Isn't the main reason why languages like Go and Rust don't do that because it would produce a ton of memory garbage?

It will be optimized by the compiler.

Can it really always be optimised by the compiler? For example, I imagine optimising a `sort(a)` which cannot mutate `a` could be quite difficult, no?

What I mean is `a = sort(a)` will be optimized to `sort(&a)` which can mutate internally.

What about:

  x := &a
  a = sort(a)
  print(x)  // Sorted?

To detect whether something can be mutated in place will require static analysis to see if there are aliases or pointers to the data. If this is an optimization that's based on whether something is safe to mutate in-place, you'll run into the problem where performance becomes different depending on whether something can be optimized or not. For example, adding "x" makes the sort call suddenly perform worse since the compiler sees that it can't mutate "a" in-place.

This is assuming that you allow multiple aliases to the same data. The reason Rust has lifetimes and borrowing is precisely to be safe about mutation. Rust wouldn't allow sort() to modify "a" in-place in the above code.
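For comparison, here is what the aliasing example above looks like in Rust, where the "is it safe to mutate in place?" question is answered statically by the borrow rules rather than by an optimizer heuristic (a sketch; the scoping is arranged so the code compiles):

```rust
fn main() {
    let mut a = vec![3, 1, 2];

    {
        let x = &a;         // shared borrow: the alias `x` from the example
        assert_eq!(x[0], 3);
    }                       // the shared borrow ends here...

    a.sort();               // ...so an exclusive (&mut) borrow for the
                            // in-place sort is allowed.
    assert_eq!(a, vec![1, 2, 3]);

    // Holding `x` across the sort would be rejected at compile time:
    // "cannot borrow `a` as mutable because it is also borrowed as immutable".
}
```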

Unless `a` is a linear value, somebody might have a reference to it, so you can't just sort it in place under the cover. The entire thing is useless if it looks like you don't have side effects but the compiler visibly breaks that guarantee.

And you probably want to pick one of sorting in-place and out-of-place.

Yeah, as a rule of thumb I'd say "If Haskell doesn't already do this optimization, find out why."

I say "rule of thumb" and I mean it that way. Sometimes there will be Haskell-specific answers. But if your programming language has less code metadata available than Haskell but is promising more optimizations, it's worth a cross-check. I agree with you strongly in this particular case; without type system support for this specific case, it's going to be very hard to optimize away things like a sort. You start getting into the world of supercompilation and whole-program optimization, the primary problem with both being that they tend to have intractable O(...) performance... but something about them makes them seem intuitively easy. I've seen a number of people fall into that trap.

(I haven't personally, but I understand the appeal. My intuition says they shouldn't be that difficult too! But the evidence clearly says it is. Very, very clearly. We're talking about things like optimizations that are super-exponential in complexity, but seem intuitively easy.)

I assume `a` would need to be copied into the local scope of the function and the optimization would be to elide the copy after analysis shows the original a is safely consumed by the call site so it does not require persistence.

This probably means lots of aliasing restrictions or in the case where the optimization can't be done, copying could be an expensive side effect of an innocent refactoring.

I hear Swift uses something like this, though it's copy-on-write. I've not used Swift in any significant capacity. Does anyone else have experiences to share with this model?

I don't think copy-on-write will prevent a copy here, since the copy is being written to inside of sorted. I don't think the compiler is smart enough to elide the copy, either.

It definitely can and does work for similar situations in other languages. It’s fragile though as aliasing accidentally somewhere or adding other kinds of uncertainty around which values are returned makes sinking the return allocation into the caller, a required prerequisite, much more likely to fail.

A good way to imagine this is having return value optimizations give you a copy which is placed over the original, which allows the work to be skipped. But that can require a whole lot of other trades around calling conventions, optimization boundaries and so on. C++ has dealt with some of this complexity recently, but its nuances took years to sort out between standard revisions, and it only became required in some cases, rather than merely legal, after compilers had plenty of time to work on it.

Yeah, I don't doubt it if Clang can do this optimization for C++, but I don't think the Swift optimizer is quite there yet since it needs to peer through many more layers of complexity.

How? In-place sorts and out-of-place sorts have different implementations.

Ada does not allow functions that alter parameters for their callers (out mode parameters).

Yes. Don’t mutate in place :thumbsup:

Pure functions in Fortran do this.

Also D with the immutable type qualifier.


> V is compiled directly to x86_64 machine code (ARM support is coming later) with no overhead, so the performance is on par with C.

Direct compilation to x86-64 machine code does not get you performance on par with C (by which I assume the author means GCC or Clang). The optimization pipelines of GCC and Clang have had decades of work put into them by some of the best compiler engineers in the world.

Since the author states that the compilation time is linear, this would seem to imply that a full suite of optimizations are not being done, since many optimizations done by GCC and Clang have nonlinear complexity. It is easy to get fast compilation if you don't perform optimizations.

> - Thread safety and guaranteed absence of data races. You no longer have to constantly ask yourself: "Is this thread safe?" Everything is! No performance costs either. For example, if you are using a hash map in a concurrent function, a thread safe hash map is used automatically. Otherwise a faster single thread hash map is used.

This description doesn't guarantee freedom from data races. (Java's memory model basically fits this description, for instance, except for the specific case of hash tables, which aren't built into the language.) Even if it did, the tricky part is determining what a "concurrent function" is. The obvious ways one might imagine doing this tend to fall down in the face of higher-order functions.

Yes, you are right. I had a mental note to update the description, I never expected this to be posted on HN so early :)

Just updated it:

> V is compiled directly to x86_64 machine code (ARM support is coming later). There's also an option to generate C code to support more platforms and achieve better performance by using sophisticated GCC/clang optimization.

Why not output to LLVM code instead of C?

LLVM is a huge dependency. It's very complex and slow. And it's C++ :)

I want every part of the language ecosystem to be simple so that everyone can contribute.

Maybe you should say "LLVM will be considered once its code is converted to V." :-)


Because that would tie you to LLVM (a moving target), and to platforms where LLVM is supported.


GCC/Clang will definitely optimize better. One way to piggyback on them is for V to spit out C code and let GCC/Clang do the hard work of producing production code, while V can still do fast compilation for development builds.

The author added the same thing to the description after his comment here.

My two biggest questions about V are 1) How is memory managed?, and 2) How is concurrency done?

"V has no runtime". No GC, but you don't have to manually release memory, like Rust but much easier. Sounds great. How?

And "no race conditions ever" and "everything is thread safe". You can do that with "no runtime" fairly easily if there's no goroutine-style concurrency. I didn't see any mentioned, but I could have easily missed it.

Those two aspects of the language are fundamental enough that I would certainly want to read about them near the top of any overview of the language.

> No GC, but you don't have to manually release memory, like Rust but much easier.

That description could fit Fortran 77.

You don't need destructors to release resources if you never release them.

Good question. I haven't mentioned memory management because it's not done yet. I know for sure there won't be a GC or reference counting.

I want to do something similar to Rust's approach, but much much simpler. It's not an easy task.

Right now the language handles very simple cases. Small strings are placed on the stack, and local variables that are not returned are cleaned up automatically.

Globals are not allowed, function args can't be modified, so that helps a bit.

No GC, no reference counting, no manual memory management... Hmmm... Not everything can be managed by RAII/SBRM... Let's say I have a function which loads a complex document (like a spreadsheet), makes some changes, saves the document and then exits. Who will dispose of this complex, dynamic document from memory? This is the CORE question! If there is a no-GC, no-RC, no-manual-management solution to this, then I AM REALLY INTERESTED to know about it...

Yes, which is why I asked. He seems to be saying it will have the ease of GC with the performance of fully manual memory management with none of the costs of either. I don't see how radically simplifying Rust's approach can do that, but I don't have to. If he can find a partial solution that is significantly better, that will be great. I don't know if he'll succeed, but I'm rooting for him.

No GC means you can have destructors, and they can manage resources other than memory, too.

+1 for well-defined concurrency.

To me the central concurrency scheme is one of the defining features of a modern language. Go and goroutines, ES/Node and event-driven paradigms, Java and threads... That does not mean a language can't handle many types of concurrency, but it's good to be opinionated on a preferred concurrency scheme from the get-go and have native methods for dealing with intercommunication (ie. Go's channels).

I'm liking Go's built-in, lightweight goroutine approach a lot more than the other "afterthought" approaches, but I think you need a runtime to dynamically allocate the goroutines and adapt flexibly. I don't want to write one myself or link to some library. If V doesn't have some goroutine equivalent (or better) built into the language, I probably won't be persuaded by its other features. Sending bits of code off to do their various jobs concurrently with (almost) the ease of calling functions sequentially would be hard to give up.

Making a programming language specifically for the needs of one program, then developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems. The Sapir-Whorf hypothesis may not be something people currently believe about natural language but for sure it's true about programming: the language you program in conditions what you think, which conditions what program you write. When the two evolve together, evolution can go to new places.

This strategy is time-honored in the Lisp world, where making a language and writing a program are more intertwined, and the cost of making a language much lower, than they usually are.

The downside to this is that unless your language gets widespread adoption, you can corner your community into a bubble and drastically increase the barrier to contribution.

I think one of the issues with the GNOME Project is this: they use a language specifically designed for GNOME/GTK+, and as if that were not enough, it has two different syntaxes: Vala, which is sort-of C#-like, and Genie, which is sort-of Python-like.

> unless your language gets widespread adoption, you can corner your community into a bubble and drastically increase the barrier to contribution.

Only if your language (and standard library) is of comparable complexity to that of widespread languages. C has more quirks than it lets on, C++ is a monstrously complex beast, Java has an enormous standard library…

If however you keep the syntax and semantics of your language simple, they can be learned in a matter of minutes. If you keep the "standard" library focused towards your application, it won't require more effort than any other regular application.

The real problem with custom languages, I think, is that very few programmers can actually write one. Most others don't even see the need, I think in part because of motivated cognition (If I delude myself into thinking I don't need something, I don't have to face the fact that I can't do it). Though the main problem is probably education: we are taught that languages are chosen, not made. As for how they are made… well, that's the realm of geniuses who have way too much time to spare.

Every programmer should have an introduction to programming languages, and we need more specialists who can whip up a DSL in a couple days. Our craft would be very different (and I think much better) if we did that.

But powerful languages of today let you build DSLs within them, so there is hardly a need to write a lexer/parser/semantics for a whole new language.

That would certainly be a good way to do it (though I reckon the limit between a mere library and an internal DSL is a bit fuzzy). Do you however have any widespread powerful language in mind? I don't know of any. Even all of them combined probably still don't count as "widespread".

Also, once you understand the problem space well enough, a custom syntax can be a nice bonus.

I would say that Scala can be called widespread and allows for such embedded DSLs.

Elixir allows you to build powerful DSLs within the language using macros and optional syntax. Also, if you write CSS well you end up with a DSL describing your screens. What other languages give you this power?

Terra [1]. I used it to create Regent [2] specifically because it takes this approach.

[1]: http://terralang.org/

[2]: http://regent-lang.org/

You’re affiliated with Stanford? If so, also with the developer of Terra?

I've written several DSLs in Nim[1] for emulators & VM, JITs, parallel computing, neural networks and parallel tensor operations.

[1]: https://nim-lang.org/

To add to the chorus, most ML dialects, Forth, Ruby, Self, Smalltalk, Python, Mozart/Oz, D, Julia, Lua; to a significant extent Perl, PHP, Nemerle, Octave, R, Javascript, shell, even venerable old SNOBOL.

Hence why making a new application-specific, general purpose language is probably not the greatest idea vs building a DSL within one of the languages you mentioned.

Simplicity is an undervalued quality. It's far simpler, from the point of view of dependency management, if one is developing a large codebase in C++, to also have the scripting language be native to that environment, rather than to first create a DLL of the C++ codebase, then implement the FFI to it using e.g. SWIG, P/Invoke, or such, to call the C++ context from one of these "premade DLL" substrates.

It's fairly trivial to expose C++ libraries to any other language context, IFF the library is wrapped to a DLL that respects the C ABI. The reverse is not true. You need to have some message passing mechanism in that case (yes, there are several not-so-hard ways to do that but that again, is added complexity).

> Simplicity is undervalued quality.

Yes, and building a library (a DSL) in an existing powerful language that can do it (and there are many as people said in this thread) for exactly what you need, is _vastly_ simpler than making a new language toolchain (which linker? which codegen? which typing?).

I agree - choose the best tool for the job. As a counterpoint - implementing an interpreted, macro-less Lisp is a few thousand lines of code in C or C++. So if the goal is just to bootstrap some language, the effort is more about typing than design or innovation (just copy the interpreter in chapter 4.1 of Structure and Interpretation of Computer Programs).

A stack-based language like Forth would be even less work.

OOP is based, partly, around the idea of 'mini algebras' (roughly speaking, public interfaces), which are domain-specific languages ( http://xahlee.info/comp/Alan_Kay_on_object_oriented_programi... )

Functional programming, especially Haskell, tends to use the term 'embedded domain-specific language' ( http://wiki.c2.com/?EmbeddedDomainSpecificLanguage )

Lisps tend to call them DSLs or sometimes "mini languages"

Forth calls them "vocabularies"

I thought Lisps called them "dialects" (same as in Rebol / Red).

Couple of other terms i've seen...

* Pidgin - An attempt to get away from DSL ambiguity!

* Slang - This is what Perl6 calls them

> I thought Lisps called them "dialects"

(Lisp) "dialect" seems to refer to a particular implementation/variant of Lisp, e.g. CommonLisp, Scheme, Racket, Clojure, MacLisp, NewLisp, etc. rather than a domain-specific language built in a Lisp.

(e.g. searching for "lisp dialect" gives pages like https://www.slant.co/topics/5928/~lisp-dialects )

What other languages give you this power?

Rebol / Red

NB. Here's an old Wikipedia link listing (some) languages that were good for creating dialects (DSLs) - https://web.archive.org/web/20140708161345/http://en.wikiped...

Red seems deeply underappreciated in my view. Any thoughts on why?

When it last showed up here on HN, I was very intrigued by it. However, it promises a lot of stuff I think many here have heard other projects promise. It seems very ambitious. Documentation isn't the best, which makes it harder (especially due to the different syntax). Overall though, I am still a little skeptical it can keep its promises. (With that said, I hope it does: it looks like it has a lot of potential.)

It's still an alpha language. I would never deploy something in production with it.

Red is still at the alpha stage of development. I suspect once it reaches beta level it may garner more appreciation.

The Tool Command Language and Toolkit (Tcl/Tk).

Scala. For better or worse.

Having wrestled with the ~ and {} spaghetti of a Spray routing module... for worse

On the other hand the parser combinator library does it pretty well. Scala gives you enough rope to make a raft.

All Lisps and Schemes?

D does that quite routinely too.

Genie seems super dead. Also, Vala is not that important for Gnome. Many apps are written in C, JavaScript, Python and Rust.

From the website:

> There's also an option to generate C code to support more platforms and achieve [(possibly)] better performance

Another reason cross-compilation to C matters: it provides an escape hatch in case the community thins out and no further updates are made to V.

That depends heavily on what the transpiled C looks like, machine-generated code can be pretty human-unreadable. In particular you generally lose all comments, you can end up with autogenerated variable and function names, huge functions with completely unidiomatic C code etc...

For instance here's what the chicken scheme transpiler outputs when told to compile a simple '(print "Hello, world!")': https://gist.github.com/simias/c96408a76eb7288fbd0d975f5dc99...

Good luck maintaining that...

I agree that modifying Chicken's output like you would handwritten C code would be an exercise in madness. But it's possible to design a language to have a nice transpilation story, for example PureScript deliberately puts restrictions on the names of types and operators so that they have a sane name in JS. Of course there's still a likely style mismatch, but what can you do?

I suspect you already know this, but for the benefit of others.

And looking at V, which is the story here, it should not be too obfuscated.

I tried to make resulting C code very readable and simple:

fn main() { println('hello world') } ==>

int main() { printf("hello world\n"); return 0; }

Erlang came about like that, starting from Prolog with a telecoms program and new language in mind, and developing both together. Prolog (I learnt recently) is surprisingly Lispy, e.g. data and program in the same form and easy for the program to change, everything in lists, and very FP - recursion for loops, non-mutating variables etc.

I don't know how many other languages started like that. PHP for one. Probably a lot.

PHP is relatively infamous for being badly designed and hard to maintain (something I did professionally for years, so I'm speaking from experience).

Erlang is still a niche.

Both are examples of confirmation bias. There is a massive graveyard of one-man languages that no one used and then died.

That was true up to PHP5, where a Java-like, saner subset of the language with stronger typing became available.

The genius of PHP was in the code delivery model and the way you could intermix HTML and code in a single page to be served by Apache.

That democratized Web programming and programming at large. The rest of the language was bad, but it didn't matter for success (see "worse is better", where "better" means fitter for the market).

If you’re saying that PHP was a useful, Turing-complete templating language, I agree. I don’t think it was well-suited for large code bases with lots of business logic, and these days I’m horrified by the idea of a Turing-complete templating language in the first place.

Turing completeness doesn't bother me as much as all the security issues that existed by default (some of which are still present to this day).

I wish programming languages made a type-level distinction between strings that are and aren't tainted by user input. That would make it so much harder to accidentally introduce injection vectors.

I totally agree that PHP was not well suited for writing large and complex programs, but its ubiquitous availability as a deployment platform made it an interesting target anyway.

The easy deployment also made it easy for people to learn it by themselves, and consequently there was a large pool of programmers who knew PHP, making it an interesting language from a hiring point of view.

And that's how we ended up with piles of crappy, proprietary and OSS PHP code in the wild. The quality of the language is only a minor factor in its success.

> The easy deployment...

This is an important point that is still underappreciated even today with all of our hindsight.

Confirmation of what?

I was just trying to think of (well-known) languages written that way. I guess there've been just as many dead languages not written with a particular program in mind.

What I took away from this, is “everyone should learn a Lisp dialect ASAP”.

Not necessarily. It's easier in Lisp, and a decades-long tradition, but Lisp is not even the only language of which that's true—there's also Forth. And there are many opportunities to apply this approach beyond those two. The more the better, because each qualitatively different starting point will lead to qualitatively different systems.

The software ecosystem is not well served by everyone being focused on the same few familiar programming approaches. Sure there are economies of scale—better tooling, programmer fungibility—but there is less intellectual diversity. More people should realize the tradeoffs here: yes you lose a lot when you start making your own language, but you also gain a lot. The program you're trying to write becomes more writeable. If that doesn't sound significant, it's because we're so conditioned to think the other way. There's another advantage, too: it's deeply intellectually rewarding. That provides staying power to work on significant projects over the long haul and is a solid motivation to do something many people think is crazy.

Really you can write a DSL in almost any language. I've worked with Java and TypeScript Selenium testing suites that were so project-specific, with everything abstracted out into methods, that you could barely tell what language you were writing in, only that it was some C descendant.

You can, but having done both I came to the conclusion that they are far from the same.

> The software ecosystem is not well served by everyone being focused on the same few familiar programming approaches. Sure there are economies of scale—better tooling, programmer fungibility—but there is less intellectual diversity.

Then it seems that the problem has more to do with political economy than with programmer culture. Capital and managers want programmers to be cogs, not artisans.

Can’t expect programmers to change this political problem, either. Especially considering that the nerdier the technologist, the less political he or she is.

Writing a simple TDD kata in OMeta was really mind opening.

Maude is another fascinating language. Pure term rewriting and explicit definitions of all data structures seem almost like a higher level of abstraction or programming.

[1] - https://en.m.wikipedia.org/wiki/Maude_system

I think for specialized projects it makes sense, and I can think of two off the top of my head: Godot and Processing. Godot is a game engine that is highly extensible, and it has one main scripting language reminiscent of Python and another for lower-level patching of other languages into the engine.

Processing, on the other hand, is more of a way for visual people to get into programming and do visual representations of data or art. As a matter of fact, the demo for V / Volt made me think of Processing, and finding out that it can be cross-platform made me wish Processing would produce native binaries like this that compile down to very small executables whilst still maintaining the same syntax (which is very Java-like, but it's not entirely necessary to reproduce a full-on Java-like language for this purpose).

I totally agree, for certain projects it makes a lot of sense to build your own language. I wish those mainstream engines would stop embedding C# (and I love C#) and make something much more creative. There are the not-so-mainstream engines out there too that do implement their own languages, like GameMaker and friends. DarkBasic also comes to mind.

Yes, co-development of languages with applications of language is important.

Not sure about “one”, though. Usually you want more than one example before you generalize to a library -> framework -> language.

As one of my professors used to say (a lot):

“Man kann nur von etwas abstrahieren das man verstanden hat”

You can only abstract from something you have understood. And premature abstraction is probably one of our biggest problems.

What a terrible idea. I'm surprised I need to spell out that the benefit of learning a language is that you can actually use it, and the effort-reward ratio is far better if you can reuse it widely. Plus a language is far more than its syntax and compiler: it's the ecosystem that makes a language productive. The last thing we need is an explosion of half-implemented, poorly supported toy languages for individual programs.

I disagree. A Cambrian explosion of 'toy' languages is exactly what we need, so that the few that become popular reach the maturity you desire.

I have this itch that there is an explicit slot in our native language infrastructure for a scripting language that is a bit less clunky than Lua, not so huge as Python, and more ClojureScript like than the other embedded lispies out there, that is trivial to embed, has a fantastic interface to C++ and is utterly pragmatic yet elegant.

A small subset of Python would probably do...

You might be interested in Starlark [sic], https://docs.bazel.build/versions/master/skylark/language.ht...

Cython is great, and as a superset of Python for native compilation/interop it is often overlooked.

I agree on principle. However, in the comment above I was thinking about a scripting language that is trivial to bolt on to an existing native program, by including just a few header and source files. Like Lua, Duktape, etc. Or how sqlite is distributed in amalgamated form.

I agree, but at least some of the ecosystem concerns can be mitigated if you choose the right technologies to build your language. For example, Eclipse Xtext lets you design languages that can easily interact with Java code and tooling, and it provides a lot of tooling support like build integration in the IDE, syntax highlighting and code completion without much effort. If you keep the scope of the language small and treat it like any other module in your project, then it's not hard to maintain.

Unless you are the competitors.

Meta-circular composition forms a powerful design technique with more than just algorithms and code. The philosophy of affordances, fault prevention, fault recovery. Self similarity helps with sourcing, alignment, jigs, etc. Many design patterns in code have analogs in other forms of engineering.

Creating a language is in an orthogonal dimension to the plane that Sapir-Whorf lives on. The realization that you can create and utilize constructed languages, not just the given languages, to solve problems. A computer programming language is a meta-tool, one uses the language to shape and reify ideas. So a computer programmer is a meta-tool user. Someone who creates computer languages is a meta-tool creator. And they are designing an idea that is congruent mathematically, mechanically (can we compute it) and mentally for a human. Maybe my version of the Turing test is, "design me a computer language to represent and solve X"

> [...] developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it [...]

I find this applies to most development, once you muddy the distinction between program and language you can see how the concept applies to other things - that is, it's a continuum, and artificially and prematurely locking down the design and implementation of a lower level piece before exploring the landscape can hurt... though determining what is "premature" and what is not is of course pure intuition.

> Making a programming language specifically for the needs of one program, then developing that language and the program together, is an underestimated strategy. I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems.

One example of this is video game designer and programmer Jonathan Blow, best known for creating the game Braid.

He's working on a language for writing games.

Here is one of many videos of him using the language:

A couple of moments in that video which I would like to highlight:

At 11:15, responding to a question from chat:

> Is there a specific reason the game is kind of in top-down 3D?

> Because that is what helps the game mechanics that we need for this game. If it was in first-person (laughs)... it would be kind of amusing... maybe we should do that (smiles) that'd be fun.

At 14:05, responding to another question from chat:

> Do I find the language scaling, well, now that I am dealing with a more complex program?

> Yes. I've been dealing with a program this complicated for like five or six months so this is nothing new.

At 14:15, responding to another question from chat:

> Do I have my own shader language?

> No, right now we are using OpenGL and [unintelligible] GLSL.

The answer to this third question indicates to me that he is focusing on getting something that works better for him than pre-existing alternatives, without getting too bogged down in making every side of his language perfect right off the bat.

In other words, he is balancing the time to perfect his language against the time he wants to spend making games rather than spending 100% of his time working on the tooling and not getting to write any games.

And while we're on the subject of people making their own languages to make games, see also Andy Gavin.

He made Game Oriented Object Lisp (GOOL) for himself and the other people at Naughty Dog to use when making Crash Bandicoot for the original PlayStation.

He later made another language, Game Oriented Assembly Lisp (GOAL), for the Naughty Dog game Jak and Daxter: The Precursor Legacy for the PS2.

He has written a series of articles about the making of Crash Bandicoot on his website. Well worth a read!

He doesn’t have a programming language. He has a series of YT videos.

Until he open sources the code (or at least lets someone else try it/look at it) discussing his “language” is giving him too much credit.

You talk as if this was an example of a successful strategy of writing a general purpose language for one program.

However, Jon Blow's successful games, Braid and The Witness, were not written in that new language.

I am just saying it seems like a good strategy. The mention of Braid was just to give some context about who I was talking about since I think at least more people have heard of Braid than the amount of people who knows who he is if you don’t state that he made Braid.

Can you cite any project where this strategy was commercially successful?

UNIX/C, for one :-)

> I believe the world would be more interesting if more people applied it—because then we'd get more qualitatively different new systems.

Interesting, yes. More productive, not sure.

It's basically quality versus quantity, or in the words of Stalin: "quantity has a quality all its own".

So far, scaling horizontally (more people working together) seems to fare far better than scaling vertically (one or a few people being way more productive alone or in a small group).

Awesome, I love to see new programming languages in action. This is great. Some thoughts.

First, ignore negativity and focus on getting constructive feedback. One of my tiny regrets is that I abandoned one of my projects due partially to negative energy. Many years ago, one of my projects was shared here ( https://news.ycombinator.com/item?id=226480 ), and the feedback was kind of a buzzkill (especially since I wasn't the one sharing it).

Second, think about the growth you want. While I could ignore the buzzkill and keep the faith, I used my language to put a real product out into the world. The crazy thing is that I got it working, and working very well, but when it came to hiring, it was a cluster fuck. I should have spent a bunch more time on documentation and examples, but I had other concerns that were higher priority. I ultimately had to abandon the whole thing, and I just rewrote everything in C# and used Mono. It was painful, but the company was able to grow faster since the tools were somewhat standard and there was a plethora of examples for the new hires.

When I look back, I was onto something. If I had kept the faith and pushed through, then I would have created something very similar to HHVM which Facebook uses. My strategy back then was to create a less awful language, improve it, then port the platform bits to a better ecosystem and preserve the "business logic".

My core advice with the programming language side of the house is to find a partner for you to lead/follow with shared values. Make it open source as soon as possible, don't wait.

Found this from the Slack thread: https://news.ycombinator.com/item?id=19081896

Is this language going to be open-source? It seems incredibly impressive, and 100% overlapping with what I'm trying to do with Zig.

To Andy:

I thought it was a new name for Zig, seeing your name and the description in the link. ;) It actually looks almost too good to be true, especially that it can handle any C++-to-V conversion. I was pushing people to check out languages like ZL exactly to rid us of C++ without throwing away legacy code. If it can do that and fast compiles, I can't wait to read the full write-up later on.

Btw, I encourage you to keep at Zig for diversity in the systems-language space. Plus, macros. I tell people to avoid them by default for more maintainable code. However, there are times when it's better to have them than not. I was happy to see D, Rust, and Julia do macros. Zig and V should have them, too, for max productivity.

To amedvednikov:

1. The name. Although Kestrel had a V language, that was a long time ago. You're not stepping on anything. I just encourage you to do one people can spell and pronounce easily that isn't already taken. That will make both search results and adoption a little better.

2. Macros. Like I said above. I saw you mention Go which intentionally tries to keep a standardized language for maintainability and easy compilation. I get that. You could add a warning to the main page that macros are available but discouraged for most situations for those reasons. "Use them only when the cost is worth it." Can just do two passes: one for macros, one for regular code. Your incremental compilation should knock out most of what little slowdown there is.
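A toy sketch of that two-pass idea in Python (the `unless` macro and the list-based AST are made up for illustration): pass 1 expands macros until none remain, pass 2 evaluates the macro-free tree, so the second pass never needs to know macros exist.

```python
# Pass 1: macros are plain functions from syntax to syntax.
MACROS = {
    # (unless c a b) => (if c b a): a purely syntactic rewrite
    'unless': lambda c, a, b: ['if', c, b, a],
}

def expand(expr):
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    if op in MACROS:
        return expand(MACROS[op](*args))   # re-expand the rewritten form
    return [expand(e) for e in expr]

# Pass 2: the evaluator only understands the core form `if`.
def evaluate(expr):
    if not isinstance(expr, list):
        return expr
    _, cond, then, alt = expr
    return evaluate(then) if evaluate(cond) else evaluate(alt)

print(evaluate(expand(['unless', False, 1, 2])))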

I'm against macros. I've done a lot of research on this topic.

One of the main goals is simplicity and maintainability. I want people to be able to jump into any code base (including stdlib and compiler) and understand what's going on. Macros don't help with that.

I went through like 10 names and ALL of them were taken. There are a lot of programming languages out there :)

I don't know of any language with macros that has a proper IDE doing more than syntax highlighting. One of the reasons languages like Dart, Java or C# have such amazing IDE capabilities w.r.t. refactoring is that they don't generate half of their code at runtime...

I agree. Macros only help the writer of the code. They significantly decrease code readability imo. Code readability is more about how easy it is to understand what is going on than about how much stuff there is to read.

Max productivity for the library implementor, not the integrator or maintainer. The number of hours I've spent inlining gross macros so I could debug them has poisoned me against all but the simplest applications of them.

I said the same thing in my comment. You get max productivity with selective use. Nine times out of 10 you don't need them. Overuse of them led to LISP code being hard to read like you said. We don't need to repeat mistakes of history. So, I said add them with a warning to minimize them for maintainability if language is trying to be like Go.

On the other hand, they should be most of the code if taking a DSL- or MOP-like approach. The people using those will be very familiar with the higher-level language. There's a solid, lower-level language underneath for when the VHLL's don't work out. The macros help there.

> Overuse of them led to LISP code being hard to read like you said

I don't think that's the case.

The poster your comment replies to claimed that there are types of macros which make debugging harder, not code reading.

Debugging code which makes use of macros is more complicated than code without. There is no doubt about it. One part of it is that debugging happens in a different place: code transformations with side effects run in the compiler, or, with interpreters, at runtime.

You're right. I glazed right over that. No offense intended.

First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.

Second, unsubstantiated proclamations like "Overuse of them led to LISP code being hard to read" reinforce the previous point. Could you provide a clear reference where Lisp macros are considered "mistakes of history"? Clear references where overuse of Lisp macros turned out to be a problem?

I'm curious if you've ever used a Lisp development environment with facilities such as interactive macroexpanders or if you're just assuming things based on your (incomplete, suspect) understanding of the domain.

>First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.

Actually both versions are valid.

You seem to not know historical information about Lisp/LISP. While Common Lisp is spelled "Lisp" and more modern use is Lisp, historically LISP has been prevalent (and tons of Lisp dialects prefer the capitalized version, e.g. like "fooLISP" or "barLISP").

Second, you are concerned with superficial details people don't and should not care about. We're programmers, we care about the code and what you can do with it, not about whether some language is "properly" spelled in caps or mixed case.

Third, you are rude, which is worse than both of the above.

"historically LISP has been prevalent"

I learned about it from old books, often on AI, I could scrounge up when I didn't have the Internet or a computer. LISP as in LISt Processor. An acronym. Due to broken memory, I sometimes forget which term to use on stuff that's faded away. I end up about randomly using LISP or Lisp unless its Common Lisp where I usually see "Lisp" in write-ups.

So, good guess.

This seems unnecessarily harsh.

Capitalization isn't a deep signal.

There are lIsPs like Clojure that argue (> data functions macros). Although I mostly disagree (for example I love Racket macros), I understand and appreciate the sentiment. I have heard it from other people who have worked with a variety of LiSp code bases over the years. TL;DR: Any form of non-trivial DSL needs supporting materials like documentation and a simple, clear design.

Although it is nice to be able to expand macros, fully-expanded macro-generating macros are clear in approximately the same way as assembly language. It is impressive if you can navigate that, but even more impressive if you can manage not to need to do so.

Clojure argues that (> data functions macros), but it still has macros and accepts that they are not only useful, but sometimes necessary. Clojure's core.async would have had to be built into the compiler, if it wasn't for macros. Just because it prefers data to functions and functions to macros, doesn't mean that it doesn't recognise the importance or usefulness of all three.

> lIsPs like Clojure that argue (> data functions macros).

That's not really Lisp related. It's more like how the community likes to see code being written.

For example, I would regularly make use of macros to provide more declarative syntax for various programming concepts.

There are Lisp dialects which are light on macros and some which use macros a lot more. For example, the base language of Common Lisp already makes use of many macros by providing them to the user as part of the language (from DEFUN, DEFCLASS, ... INCF, up to LOOP).

It's hard to avoid naming conflicts, but there is:

V programming language https://www.vcode.org/

> Zig and V should have them, too, for max productivity.

Have you read my articles about macros and what Zig does instead?


I would be curious to hear your thoughts with this context.

Seems like you are going for a two-staged language with homogeneous metaprogramming. As opposed to heterogeneous metaprogramming (macros). I like it, I must say.

> Btw, I encourage you to keep at Zig for diversity in systems, language space.

I don’t think that “Basically like C but a tiny bit better” deserves to be associated with the word “diversity”.

> 100% overlapping with what I'm trying to do with Zig.

I don't see any mention of meta-programming for V, which seems to be a big emphasis for Zig? This seems like a massive feature that at least I'd care about. I haven't personally had the opportunity to try Zig, but I'm rooting for you, so I hope you keep going with the project.


I'm pretty sure V will deliver on none of its promises. I wouldn't be surprised if nothing came of the announcement at all. The author seems to have a history of big claims and vanishing shortly after. Also, the details on implementation specifics here in the comments don't make him sound like someone who has an idea of the internals of a compiler or programming languages.

Can you give some citations / references?

FTR, I found his (former?) eul\.im which now redirects to some fishy website.

That's what I referred to. His comment history looks like he is advertising his chat apps all the time. One comment pointed out that eul.im was 4MB in size but loaded a browser runtime upon initial start; also, he never opened the source. I'm not saying that he is a scam or anything, but that product page makes some very huge claims and has little to no proof of V's existence. So I think it is too soon to celebrate the new GOAT in the programming language game.

The claim that the language is guaranteed thread safe without data races is also unsubstantiated. Avoiding data races, and more importantly, dead and live-lock, are incredibly difficult problems to solve.

V looks interesting, but I'll wait and see before I jump onboard.

Sadly, that's the conclusion I came to after reading more details/responses.

Lots of handwaving, things mentioned as language features on the webpage and then revealed to not be present (in responses), and lack of specifics...

Very interesting. What I'm curious about is how this language is compiled, the implication seems to be that it gets translated to C/C++ which seems to overlap a lot with what we are doing with Nim :)

It seems to be an option to compile to C for platform support, but the page also says:

> V is compiled directly to x86_64 machine code (ARM support is coming later) with no overhead, so the performance is on par with C.

Which is interesting. There is a lot more information required still but direct may also imply that it's doing the actual instruction scheduling too rather than relying on LLVM. I'm looking forward to hearing more.

LLVM is not used.

Originally it was compiled to C, now there's also an option to generate machine code directly which improved compilation time by a factor of 10.

Would you mind explaining why compiling to machine code was faster than to C?

From https://volt.ws/lang:

  > V compiler is written in V. The language will be open-sourced later in 2019.

I think /lang may be out of date from the home page. Here's what's on the index:

    > Is Volt open-source?
    > Not at the moment. Due to several reasons, right now the development model is similar to that of Sublime Text. The app is going to be open-sourced in 2021, so you don't have to worry about it being abandoned.
Edit: Just in case people miss the responses, my apologies, this quote may be referring to Volt the app, not the language (V).

That refers to the Volt app, not the V language. According to the website, the V language will be open sourced in 2019, and Volt (the app) will be open sourced in 2021.

Ahh great point, totally overlooked that.

It's at least plausible they could open source the language without open sourcing the app at the same time. So they're not necessarily in contradiction.

Correct, the language will be open sourced this year.

I also found Volt in that thread and have been reading about V ever since. Thanks for posting!

Wow, I like the selling points:

* Strong modular system and built in testing.

* Global state is not allowed.

* There's no null and everything is automatically initialized to empty values. No more null reference crashes.

* Variables are immutable by default and functions are partially pure: function arguments are always immutable, only method's receiver can be changed.

* Thread safety and guaranteed absence of data races. You no longer have to constantly ask yourself: "Is this thread safe?" Everything is! No performance costs either. For example, if you are using a hash map in a concurrent function, a thread safe hash map is used automatically. Otherwise a faster single thread hash map is used.

* Strict automatic code formatting. It goes further than gofmt and even has a set of rules for empty lines to ensure truly one coding style.

Especially eye-catching are the two modes of every data structure: switch to thread-safe if there is concurrent access.

For no null, is he saying that everything is just initialized to a default value (int=0, str=“”, etc)? I'm thrown off by "everything is initialized to empty values" because I don't see how empty and null are different.

Right, the only syntax for variable declaration is a := val, so you are forced to initialize variables.

The difference is all values are valid. In C, you can't know that a `char *` is safe to dereference.

Garbage-but-valid values are, in my opinion, much harder errors to catch than a simple crash/null pointer exception, because they can silently corrupt data.

I'm not a fan of null (option types seem better -- which V does say it has), but defaulting to an "empty" value isn't the answer IMHO as it makes it much harder to debug by obscuring that there's a problem at all. You may not realise that the 0 is actually an "empty" and not a valid 0 until you realise all of your calculations are wrong, months later.
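The concern in the two comments above can be sketched in Python; the function names here are illustrative, not from V:

```python
from typing import Optional

def average_with_default(values):
    # "Empty value" style: a missing result silently becomes 0,
    # which is indistinguishable from a legitimate average of 0.
    if not values:
        return 0
    return sum(values) // len(values)

def average_or_none(values) -> Optional[int]:
    # Option-type style: the caller is forced to notice the missing
    # case before using the number in further calculations.
    if not values:
        return None
    return sum(values) // len(values)

print(average_with_default([]))  # 0 -- looks like real data
print(average_or_none([]))       # None -- the absence is explicit
```

The first version is the failure mode being described: by the time the wrong 0 shows up in a calculation, the point where the data went missing is long gone.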

“No variable shadowing” might sound great when you’re thinking about that one time someone confused you for three seconds with the addition of a new inner variable with the same name as an outer variable. But once you realize that it equally means forbidding the addition of a new outer variable with the same name as an inner variable (even though those inner variables are supposed to be implementation details)—and, as a corollary, you can never add new builtins to the language without breaking backwards compatibility—you’ll realize that most languages allow shadowing for a reason.

> you can never add new builtins to the language without breaking backwards compatibility

Adding any keyword is by definition breaking backwards compatibility, so I'd say this is the correct behavior.

I think you're confusing "keyword" with "reserved word". A keyword has a special meaning in certain circumstances, while a reserved word cannot be used as an identifier. In some languages, those are the same thing.

For example, "goto" is a reserved word in Java, while not a keyword. Inversely, in Fortran, keywords are not reserved names, so `if if then then else else` is valid. C# has "keywords" and "contextual keywords", both categories being keywords, but only the former being reserved names.

Regardless, a builtin might be referring to a function that's included as part of the base language - like make() in Golang or zip() in Python.

Eh, right. But correcting for that I still hold the same view that adding new builtins should be considered breaking backwards compatibility.

EDIT: Since someone has gone through the trouble of downvoting this viewpoint, I would like an explanation as to why I'm wrong here. I cannot imagine a scenario in which doing so would not likely break existing code.

If you allow shadowing then surely the local definition of the name will take precedence over the new name introduced in the stdlib (or wherever), and thus the program will keep behaving how it did before the new builtin was introduced.

I think I must be misreading you, because it sounds like you're arguing for the idea that adding new names to an API should be considered a breaking change? I'll accept that maybe reflection will catch those changes, but with that exception why would any code even notice?

Let's say python is adding a new builtin function, like "filter".

I already have a "filter" function defined in my program, so it will shadow the builtin function, and my program will still work as intended.
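This mechanism can be shown directly in Python; `filter` is in fact already a builtin there, which makes it a convenient stand-in for the hypothetical new-builtin scenario:

```python
# User code written before the (hypothetical) builtin existed.
def filter(items, predicate):
    return [x for x in items if predicate(x)]

# Because Python allows shadowing, this call resolves to the local
# definition above, not the builtin, so existing code keeps working
# unchanged after the builtin is introduced.
result = filter([1, 2, 3, 4], lambda x: x % 2 == 0)
print(result)  # [2, 4]
```

Note that the argument order here is the reverse of the real builtin's `filter(function, iterable)`; because the local definition wins under shadowing, that difference is harmless, which is exactly the point.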

Is this a real language, or am I dreaming this?

Those are exactly the ideas I had for what would make a perfect language! (Assuming they're implemented properly of course).

Simple features. Immutable and pure by default (but not dogmatically so). Fast compile. Hot reload. Automatic C interop. Fast-ish. Built in niceties like hashmaps, strings, and vectors (niceties compared to bare C). Receivers so you don't have to do the song and dance you do in C to tie structs and functions. No header files!

Go came close, but no cigar. Rust added the whole kitchen sink and loads of accidental complexity. Anxious to see how this fares...

You might want to take a peek at the Nim language as well; it has all of the above and is quite mature.

Hot reload though?

Have you seen Zig?

Yes, Zig is another one I'm excited about.

It's interesting that the roadmap of volt has been saying v1.0 is just around the corner for the past half year. The other roadmap items also don't change much.

It would be great if the roadmap contained realistic items. Once a user is burned by an unmet expectation he won't believe anything else on the website.

Yes, I've done a terrible job with estimations and for 9 months I lived in "release tomorrow" mode.

I made lots of mistakes that caused the delay, I'll post a detailed blog about it.

Should have stuck to "it's ready when it's ready".

Ironically, this time it really is going to be released tomorrow (Feb 7).

Please don't take this the wrong way, but I'm almost more excited to read blog posts about your process and what you've learned than I am for the eventual product, or the language used to create the product (though I am excited for both of those!). Reading about the experiences people had trying crazy new stuff is more interesting than the results of trying crazy new stuff, imo.

When you figure out how to do a good job with estimations that can be another post, because I still haven't figured that out. It's way easier to reason about programming language semantics than to guess how long a reference implementation will take.

macOS version has just been released. Windows version will be released later today.

Yesterday, I was in my Applications folder and deleted an old version with an "ahh too bad this never lived". Now, a new story. Thanks, can't wait to read more about it!!

I think doing away with global variables is not a very good idea. While using globals is usually a bad idea, there are many instances where globals are appropriate (at least for languages supporting mutability). People who say they never use globals usually do use globals and are just trying to convince themselves they are not because they heard they were bad from somebody.

The entire Spring framework is IMO an elaborate construction built so that engineers could use global variables without their managers finding out. There is little to no difference between carefully using global variables and Spring dependency injection except syntax.

The best solution I have ever seen to global variables is definitely parameterize with Racket (https://docs.racket-lang.org/guide/parameterize.html). I don't think Racket was the first language to come up with this, but it was the first one I am aware of. The basic idea is that you define some global with a default value. However, you can call parameterize to change the value for the duration of some input function. It is made thread safe by using thread local memory. It then resets the parameter back to the default value at the end of the function.
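For readers without Racket at hand, a rough Python analog can be built on `contextvars` (the `parameterized` helper below is an illustrative sketch, not a standard API; Racket's real `parameterize` also cooperates with continuations, which this does not model):

```python
from contextvars import ContextVar

# A "parameter" with a default value, loosely analogous to
# Racket's (make-parameter 0).
indent = ContextVar("indent", default=0)

log = []

def show(msg):
    log.append(" " * indent.get() + msg)

def parameterized(var, value, thunk):
    # Analog of (parameterize ([var value]) (thunk)): install the
    # new value, run the function, then restore the previous value
    # even if the function raises.
    token = var.set(value)
    try:
        return thunk()
    finally:
        var.reset(token)

show("outer")                                    # uses the default, 0
parameterized(indent, 4, lambda: show("inner"))  # sees indent = 4
show("outer again")                              # default restored
print(log)
```

Because `ContextVar` values are per-context, this also gets the thread-safety property described above for free when each thread runs in its own context.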

On the other end of the spectrum, I think Rust also has a very good implementation of globals. It will let you use global variables, but you have to declare it as mutable, use some form of locking, or use an UnsafeCell. Additionally, you have to mark your code as unsafe any time you try to read or change this global variable.

> There is little to no difference between carefully using global variables and Spring dependency injection except syntax.

Until you need to write unit tests.

I find myself sometimes wishing that Rust genuinely didn’t have globals. There are some things where it’d cause pain, and some places where I’m not sure what you’d replace it with (lazy_static, for example), but I find that Rust actually makes it too easy to do globals and thus have unexpected side-effects (e.g. command line arguments, environment variables, working directory—I would genuinely prefer these things to be passed to main for me to use at my discretion), although in most places the culture is against using them, which saves it from being too much of a problem. Yet still, strict functional purity can open up some delightful optimisations and avoid various bugs, just like how putting error handling into the type signatures helps clarify things and avoid all kinds of bugs. I’d like to see what Rust would be like with even fewer, or no, globals.

> The best solution I have ever seen to global variables is definitely parameterize with Racket (https://docs.racket-lang.org/guide/parameterize.html). I don't think Racket was the first language to come up with this, but it was the first one I am aware of. The basic idea is that you define some global with a default value. However, you can call parameterize to change the value for the duration of some input function.

Isn't that basically Perl's `local`?

It is, but Lisp did have it first, even if not through Racket.

Yes, it's called dynamic scoping, and for a long time it wasn't believed that the other option (what we call lexical scoping today) could even be implemented efficiently.

V consts are a bit more powerful. For example you can define structs:

const red = gx.Color{255, 0, 0}

or even call simple functions that only return values

const red = gx.rgb(255, 0, 0)

> I don't think Racket was the first language to come up with this.

Racket's parameters are just dynamically scoped variables. Most Lisps have them. The older Lisps actually predate lexically scoped variables and had dynamically scoped variables exclusively! Emacs Lisp is not even that old and only had dynamic scope until relatively recently.

> Racket's parameters are just dynamically scoped variables.

More or less. Racket's parameters also work with multiple threads and cooperate with continuations.

In Scheme it is common to see `fluid-let` to handle classic dynamically scoped variables.

I think most Common Lisp implementations that provide threads, make dynamically-scoped variables (called "special variables" in Common Lisp) thread-local.

I like to know that there are still developers for whom the size of an application is an important issue.

Going a bit more abstract, it's a good idea to do this with mental models, too--the programming languages of the mind. Many of us are heavily burdened in our problem-solving long before we fire up our text editor, because our mental model for exactly what it is we are doing is extremely sloppy and rich with unnecessary dependencies.

A lot of people continually tinker with mental models that are into the Gigabyte-equivalent range in terms of all they intend and promise to do. One example here on HN might be the "startup" model. What "it" is seems pretty fluid at this point and in various discussions it gets mashed and molded to fit this concern or that one. Better models will come along that will solve problems nobody can yet put their finger on. (I'm speaking in the abstract here, but I've experienced and worked heavily on this kind of model-change and it can be very valuable.)

What typically happens is, someone comes along and isolates an issue which promises high leverage or high controversy or both, brings a set of problems into really sharp relief and remains in the needed context without the burden of supporting and interlinking with every other context out there, and voila--a powerful solution emerges in a very efficient way. Pretty soon everybody who needed a [startup] mindset now needs a [successor-lens] mindset. And not just in name--it's clear that this can really help. It's good stuff.

It's really just more of what we call "technology" and is observable in the same sorts of curves, but again, there's a model that's overburdened--the technology of the mind still overlaps with and rubs against what we consider "true" technology of the "useful arts" sort. As a civilization we suffer, mostly unknowingly, under the burden of yesterday's thinking about how things fit or don't fit into which categories.

And that's why I'm gonna Patreon the shit out of him. We need more heroes like this.

The SS-ci5er is here to take the call.

This is "the deal"...

There are many computer environments beyond the desktop and cloud servers. Arguably most computer environments.

But to reduce it: imagine an O&G pipeline controller that stupidly did something bigger than QNX & C. That will be pumping oil and gas for 30 years. Online upgrades, until some young turk blows out the library size. Oil spill with a blown line, and New Jersey explodes.

Those systems aren't running desktop applications though.

Android developers. APK size is still a big problem.

I always hated how bloated Android / Java are.

Now that I have experience with translating C/C++, it would be really cool to translate existing Android Java apps to V. This would probably take more than a year, though...

100+ MB down to 5 MB is such a nice win, it's amazing that the author even bothered to continue from 5 MB down to 100 KB. Very impressive!

Sounds too good to be true. If this is released it'll be serious competition to Rust, Nim, Zig, etc. Let's hope for the best. There are just so many amazing features. There's even a graphics library in it.

I'm very excited about it. I've been working on this and Volt full time for a year.

Very good to know! Keep going strong !

How exactly is this a serious competitor to Rust? The guarantees volt is (claiming) to make are nothing compared to Rust. I'm not saying this is bad (Rust is pretty rigorous and hard to code in), but I don't really think they are comparable.

It does seem similar to Nim and Zig; I just think Rust is in a different category from almost all other languages altogether.

To the problem you were originally trying to solve, why not just use Rust? Go and C are really about as related as Java and C. Rust would have met all your requirements, and has a lot of features you added to V to begin with.

Slow compilation. Complexity.

Why not Pascal?

That's strange, I find Rust to be much less complex than working with C++ or C. It keeps track of all the tough bits for me, and it has all of the nice expressive stuff from Haskell. With Go I kept running into cases where the language simply had no feature to save me from multiplicative complexity in my code base.

I haven't had any problems with build times thus far, how big is your project?

It's not big. But I'm developing V so that everyone can create large applications with very fast compilation times. I'm getting a 120x improvement for DOOM 3, and I think it can be up to 400 times faster for more complex C++ projects using more templates, Boost, etc.

Of course I didn't need a new language to make my project compile faster :) I just wanted a simpler C, and I had some experience with writing languages (I wrote 2 languages at school/uni).

Now I'm actually more excited about V than the original product I created it for :)

This is the part that astonishes me the most. How can you compile so fast?

He is Russian; these guys are crazy when it comes to performance lol.

I had been looking for this exact project, but I couldn't remember its name for the longest time. But I remember that home page exactly. Doing a bit of searching, it turns out volt.ws appears to be a rebranding of the previously posted [1] Eul (eul.im), submitted by alex-e (whom I assume is its author). Either way, volt.ws and the associated V language sound quite interesting, and I look forward to hearing more about this in the future.

[1] https://news.ycombinator.com/item?id=14778263

As an (anonymous) programming language designer, a few bits of feedback.

First, nice concept, but without open code, it might as well not exist, and without open specification, it might as well be yours alone, like one of Tolkien's languages. Closed languages wither and die, and yours seems well onto that path.

Second, what makes V compelling to you appears to be completely uninteresting to me in terms of language design. It might as well compile from V to Go; I can't see why not!

Whenever a language designer appeals to simplicity, they are usually appealing to whatever makes it possible for them to be productive, and they are usually missing that the productivity is personal because the designer is the one who builds the language. The GL demo seems to be a great example of this sort of situation.

I hope that you publish your work so that we may properly critique it.

Edit: Here is another language designer who is not me saying "closed languages die" (https://blog.golang.org/open-source). I think that, until we actually have a compiler for V (or whatever it is hopefully renamed to before release) in our hands, we ought to be extremely careful about trusting that any of this exists. It is all too common in PLT/PLD for somebody to come in with bold claims, outrageous mockups, and zero toolchain. I addressed what I saw, which is yet another compiles-to-Go hobby language. To become more than that requires a committed community and a common repository of open code, and the author appears to have only the former.

> Closed languages wither and die, and yours seems well onto that path.

This is unnecessarily harsh. The author has already said it will be open-sourced later. I can understand the reasons not to open source now. Managing an open source project is no small amount of work.

Second, notwithstanding V's slim feature set, it's already more successful than 99% of language design attempts out there in that it ships. It certainly succeeds in letting the author build his other projects faster and more easily. It fulfills the author's own needs. I'm sure Perl and Python started that way.

Look at the author's comments in this thread; he's been known to miss deadlines before. Open sourcing is probably gonna be really, really delayed.

This is his personal project. He really does not owe anyone anything, deadline or no deadline. Open source "users" have been getting really entitled these days.

As per contractual obligation of the author you can withhold pay and demand damages... oh wait.

This is engineering we're talking about. Deadlines and time estimations are a hard problem.

jeez man is any of that really necessary

> I hope that you publish your work so that we may properly critique it.

I don't know about this author, but for me, this would emphatically not be a motivating reason to publish my work. I might publish work so that someone could get some use out of it, or to show off my brilliance. But if all you're going to do is critique it (no doubt with all the familiarity born of five minutes of looking at the tutorial), then I'd just as soon you never see it.

>Here is another language designer who is not me saying "closed languages die" (https://blog.golang.org/open-source).

I don't think the person you're quoting would advocate that languages must start out as open source. Go sure didn't. It was developed closed source within Google for two years before it was even announced.

I guess it depends on how the critique is delivered. I would unironically love it if an expert level in [repo language] would come along and critique my open source code.

If they were an arrogant shit-head then I'd probably just block them, regardless of the technical merit of what they wrote.

Sure. But an AC on HackerNews? Not so much.

Casual criticisms on Hacker News are still more valuable for drawing attention to your project than in-depth comment chains from renowned experts on [repository manager of choice], especially since those comment chains are undiscoverable unless you're already interested in the project (or the chain gets linked on Hacker News).

Plus, this may just be me as a non-(designing-a-tool-language-for-a-project) dunce speaking, but reading a critique of a language/framework that I haven't thought of makes me want to try out the language and see how that shortcoming affects the way I work. It's the reason I tried out Go, Elm, and Vue.js.

It's not necessary for it to be a general purpose programming language, though. I think it is kind of neat to have a very specific, personal language that fits your mental model. On a larger team, probably not what you want, but there are other languages that are optimized for that.

Basically you're right if the creator wants a widely-adopted general-purpose language. But there's other valid approaches I think.

> It is all too common in PLT/PLD for somebody to come in with bold claims, outrageous mockups, and zero toolchain

This. It's crazy to me how quickly developers are ready to get behind something without even being able to use it. Jai is similar in this regard.

It's easy to make wild claims like "super fast compilation" or "can be translated from C++" when you don't have hundreds of users, all finding edge cases and wanting different things. Especially easy when you haven't released anything so everybody is projecting their favourite features onto the language.

Like it says on the website, the language will be open-sourced later in 2019.

