Nice work. One note: what you defined in your example is an algebraic data type, but not a generalized algebraic data type. The data constructors' types are implicit, as opposed to being explicitly declared. I liked this wiki page as a guide https://wiki.haskell.org/GADTs_for_dummies
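For readers who haven't seen the distinction, a minimal Haskell sketch of the same idea written both ways (the type and constructor names here are illustrative, not from the parent's example):

```haskell
{-# LANGUAGE GADTs #-}

-- Plain ADT: the constructors' types are implicit; every constructor
-- returns the same type, Expr, with no way to refine it.
data Expr
  = IntLit Int
  | BoolLit Bool

-- GADT: each constructor's full type is declared explicitly, so
-- constructors can refine the type parameter.
data TExpr a where
  TIntLit  :: Int  -> TExpr Int
  TBoolLit :: Bool -> TExpr Bool
  TAdd     :: TExpr Int -> TExpr Int -> TExpr Int

-- That refinement is what lets a well-typed evaluator return `a` directly.
eval :: TExpr a -> a
eval (TIntLit n)  = n
eval (TBoolLit b) = b
eval (TAdd x y)   = eval x + eval y
```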
> What will Verve do well, that no (major) language does well?
I'm always thinking about that, and I honestly wouldn't write a "proper" language (as in working on it full time, and expecting people to really adopt it) without having an answer for that. Nonetheless, I still see the value in writing things that are not ground-breaking, for the sake of learning. Just reading about compilers and PLT without any practice was really hard for me, and working on this language has been super helpful from a learning perspective.
I have had an idea concerning error handling, but I don't really know if it makes sense. There has been a push in the functional programming community to do error handling with Maybe or Either monads, or something similar to them. However, this creates a situation where many functions return these monads (two-track values), but most functions accept simple values (one-track values) as input. So there is a constant need to lift functions. I only know a little Haskell, so maybe the burden of constantly lifting is not so awful, but it currently seems that way to me. I was thinking that a language that could detect this and automatically lift the functions might be cool.

However, the normal case is simply to push failures through the system by skipping all functions after the failure, and this is not always the behavior needed. Perhaps it is OK to continue if only one of a set of functions succeeds, or perhaps one wants to retry some number of times, or for some duration, before giving up. It would seem possible to use monads to handle all desired cases, but not with auto-lifting.

In any case, I really think that a better system for handling errors could be the killer feature that makes a language worthwhile. It is something that would pervade the language's libraries, and thus make sense for a new language. This video partially inspired the idea: Scott Wlaschin - Railway Oriented Programming — error handling in functional languages. https://vimeo.com/97344498 The other inspiration is the language Icon, where every expression has a success/failure property with automatic backtracking.
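In Haskell terms, the "lifting" described above is what `fmap` and `>>=` already do by hand; a rough sketch (the function names here are made up for illustration):

```haskell
-- A "one-track" function takes and returns plain values...
double :: Int -> Int
double = (* 2)

-- ...while a "two-track" function may fail.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- `fmap` lifts a one-track function onto the success track, and `>>=`
-- chains two-track functions. The idea in the comment is that a
-- compiler could insert these lifts automatically.
example :: Either String Int
example = fmap double (safeDiv 10 2) >>= safeDiv 100
-- Right 10: 10 `div` 2 = 5, doubled to 10, then 100 `div` 10 = 10
```

The "skip everything after the first failure" behavior is exactly what `>>=` for Either gives you; retries or "first of several to succeed" would need a different combinator, which is why auto-lifting alone can't cover every policy.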
What you're describing as far as lifting goes is not unlike algebraic effects, which Eff[1] and Idris[2] have. Basically, you can have some sort of "exception" effect, limited to a particular type of error, and the system will mix effects without your having to use monad transformers. This particular application is not too dissimilar to checked exceptions, but the algebraic effect approach gives you a lot more power. It also makes it easy to deal with pure vs. impure functions, IO, etc. seamlessly.
Responding to internal errors differently based on the caller's desire is possible with Lisp; there's a good overview here[3]. It looks like algebraic effects can do something similar, as described in this paper[4] (search for lisp and the relevant portion should show up).
Another language worth looking into if you're interested in effect systems is Nim. It includes an effect system that deals with both exceptions and other effects (such as IO read/write effects): http://nim-lang.org/docs/manual.html#effect-system-exception.... The exception tracking is similar to a checked exception system, but is far less annoying than Java's. In my opinion it is the best way to ensure exceptions are handled.
I have a suggestion: what about not having integers and floating-point numbers, only fractions? Some kinds of computation are faster and more precise with fractions than with floating-point numbers.
It would be an interesting twist if fractions were the default.
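For what it's worth, Haskell already ships exact rationals in Data.Ratio, which shows the precision difference the suggestion is after:

```haskell
import Data.Ratio ((%))

-- Floating point accumulates representation error:
floatSum :: Double
floatSum = 0.1 + 0.2            -- 0.30000000000000004, not 0.3

-- Rationals stay exact:
ratSum :: Rational
ratSum = 1 % 10 + 2 % 10        -- exactly 3 % 10
```

The trade-off is that numerators and denominators can grow without bound under repeated arithmetic, which is presumably why most languages keep floats as the default rather than fractions.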
Reading through it, I'm unsure whether the VM is necessary or included. To me, 'unnecessary' and 'runs on it by default' seem slightly contradictory, or at least surprising. I don't have time to read through your source code right now, but if you're not running on your VM, what output do you produce? ELF executables against the x86-64 Linux ABI?
I was also surprised that so many terms in your post have links to the definition in Wikipedia. After finishing the article and scrolling all the way down, I got to your paragraph that explained your rationale for doing so, so that's fair. However, I admit I expected these terms to link to some Verve documentation about that particular component or concept to explain details about your implementation. I realize that you may have a different audience in mind than, say, the Haskell or Python docs, but I wouldn't expect those sites to link to dictionary definitions at all.
Right now it runs on its own VM. What I meant by "the VM is no longer necessary" was that the language went from being dynamic (when I was just prototyping with lisp) to static (in the current state), so it should be easy to generate machine code ahead of time, instead of having an interpreter (or adding a JIT).
The reason the definitions link to Wikipedia is that this is the first piece of documentation on Verve ever. Hopefully I'll be able to cover most of it in proper docs later, but I thought for now I'd add the link to the definitions in case someone reading through was not familiar with them (I personally hate to stop reading to start googling for acronyms).
Thanks for posting the errors, I'll get Ubuntu running and look into it. It seems to be just the __unused annotations and the non-portable pthread call, so it should be easy to get it working. :)
Residents of the San Francisco Bay Area should also check out Verve, a coffee roaster based out of Santa Cruz. There are two locations (the Opal Cliffs one on 41st Ave is nicer, IMHO, and easier to find parking for - plus there's the cliffs/beach nearby). It's one of the better roasters, the cafes have good atmosphere, and it's just one of those things you miss when you move away from California.
> But first of all, why am I writing this language? The short answer is: For fun.
> One of the first things I usually hear is “Why don’t you target LLVM?” (or some other runtime), and the answer is: because that wouldn’t be as much fun. Sure, it’d be much easier to get “production ready” that way, but as I said, the goal here is really to learn and have fun.
This is awesome. So glad to see people writing languages just for fun.
For anyone else into that kind of thing, check out #proglangdesign on freenode. Several really smart people who also just want to write languages for fun often chat there, about language design and implementation.
What I want to see is a functional language that embraces structural typing and extensible records/variants, instead of nominal typing. This would be an excellent fit for a world full of SQL databases and JSON documents. It would also be a considerable implementation challenge!
Like PureScript's, Elm's record system is very simple; it is extensible, but, for example, it cannot type a SQL join. It is possible to type a SQL join in Haskell using type-level sets/maps and type families.
That doesn't seem to be too far adrift from how Dialyzer's success typing and Erlang work, though you'd have to create explicit spec definitions of all inputs and outputs of different shapes. There'd be little if any implicitness possible aside from implicit sub-typing.
Yes, nice example. Structural typing is a good fit for typing the list/map-centric code common in dynamic languages. It's probably nominal typing that the dynamic proponents have rejected.
How would you type a projection or join operation with nominal typing? You could define and name a set of additional result types, but this is very cumbersome, the type system can and should compute the result type for us.
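A value-level Haskell sketch of the point (a structural type system would perform this same computation at the type level; `Row` and `joinRows` are illustrative names, not from any library):

```haskell
import qualified Data.Map as Map

-- Model a row as a map from column name to value. A join's result row
-- is the union of the two input rows' columns -- exactly the
-- computation we'd want the type checker to perform on record types.
type Row = Map.Map String String

joinRows :: Row -> Row -> Row
joinRows = Map.union   -- left-biased: shared columns keep the left value

-- joinRows {id, name} {id, age}  ~>  {id, name, age}
```

With nominal typing, every such result shape would need its own declared, named type; structurally, the `{id, name, age}` row type simply falls out of the union.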
Yes it does partly, but it's mostly nominal. The records are extensible but they cannot type a SQL join (record union) and there are no structural variants. Ur/Web is probably a better example, but I'm not sure it's active.
I have a question about the "extern" keyword. If this is an interpreter, how can this possibly work? I.e., what is the interface between the host and the language?
I'm writing an interpreter (in F#) and I have found that surfacing functions from the host is not as easy as I had imagined.
`extern` means it's implemented natively, but it has to conform to the interface that the VM provides, and the function has to be registered with the VM. At runtime, the interpreter knows whether a function is local (i.e. in the bytecode) or extern (i.e. C++). If it's local, it'll jump to the function's offset in the bytecode; if it's extern, the interpreter will make a native call.
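A minimal sketch of that dispatch in Haskell (the names and shapes here are assumptions for illustration, not Verve's actual internals):

```haskell
import qualified Data.Map as Map

type Value = Int

-- A function is either bytecode at some offset, or a host-provided
-- native implementation registered with the VM.
data Function
  = Bytecode Int
  | Extern ([Value] -> Value)

type FunctionTable = Map.Map String Function

-- Call dispatch: jump into the bytecode for local functions, or make
-- a native call for extern ones.
call :: FunctionTable -> String -> [Value] -> Maybe Value
call table name args =
  case Map.lookup name table of
    Just (Bytecode offset) -> Just (runBytecodeAt offset args)
    Just (Extern f)        -> Just (f args)
    Nothing                -> Nothing
  where
    runBytecodeAt _ _ = 0  -- stand-in for the real interpreter loop
```

The key point is the registration step: the host exposes a function by inserting an `Extern` entry into the table, so the interpreter's call site never needs to know where the implementation lives.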
Yeah, it strikes me as a bad idea. I mean, auto-returning the last statement is one thing but not allowing explicit returns means you can't use guard clauses.
I started with colons indicating the return type, but IMO it gets too confusing when you have functions as parameters, e.g. `foo(bar: (int, string): float): float`; but that might just be personal preference.
Forth and Factor do a double-dash to separate the inputs from the outputs and multiple outputs are allowed. So your example could look something like `foo(bar: (int, string -- float) -- float)` or `foo(bar: (int, string -> float) -> float)`
That's not a dependency, that's a limitation: the interpreter is only implemented for one platform right now, but there's no reason why it shouldn't be possible to add support for a new platform without changing the existing code.
What I meant by no dependencies is that I don't aim to use any libraries/toolchains to facilitate the job of executing/compiling the language (or providing runtime support).
Until it can be compiled to 3d prints for fluidic logic [1][2], I think the absurd dependency of requiring electricity, at all, is really going to hold back this language.
I know functional programmers favor purity, but it seems reasonable for a compiler to compile its source code in such a way that it can run somewhere. But then again, maybe Haskell programmers would be satisfied with the mere knowledge that their code typechecks...
Don't pitch your software as dependency-free if it depends on a certain architecture. There's no need these days with LLVM. Otherwise, make it clear you actually wrote an arch-specific assembler.
Sorry, I shouldn't have implied you were intentionally misleading. I'm simply pointing out that by describing your language as without dependencies, you're just swapping out the explicit IR layer for a hard-wired ISA layer (and Linux ABI), which is no less of a dependency.
I mean, GAS/x86-64 is cool, I enjoy toy languages as much as the next person—I just wouldn't describe it any other way than architecture-specific (and therefore useless to most people) so I don't have to click through if I can't play with it.
There's a simple mark-and-sweep GC included, though I do miss it dumping the registers in use.
In this state, all stack variables need to be declared volatile (as in potion), so the GC doesn't miss any stack values that are temporarily kept in registers.
But walking the stack up to the thread-local values is cool. I hadn't seen that before.