What design decisions and features were made based on that? Basically, what does "designed for game development" mean?
Also, checking the commit history, the first commit says it came after a rewrite in Haskell. What was the original language, and why the change?
>Basically, what does "designed for game development" mean?
I've been doing a lot of work in Haxe, but have struggled with performance issues due to GC and memory churn. I wanted a lower level language with more control, but without sacrificing the ergonomics and abstractions I'm used to. Term rewriting and implicits are examples of features that let me layer abstractions onto a language but still maintain complete control.
>What was the original language, and why the change?
It's not up on GitHub, but the prototype was begun in Rust. Ultimately I found I'm much more productive in Haskell and made the switch.
The reason is that I think the way to really move up the baseline is to take another page from Haxe and aim for cross-compatibility between different systems languages, becoming a "language that unites them all". That could mean relatively new and hip ones like Rust, D, or Zig, or older ones like Pascal (Delphi), Fortran, COBOL...
And those are niches relative to C, but they're nearly unserved niches AFAIK.
Also the C IR code might make it easier to interop with other languages, since C is basically the 'lingua franca' that all other languages can talk to in some form.
The downside of leveraging a platform's given C compiler as your backend is that you'll now spend a large chunk of time learning the hard way about all the various incompatibilities between C compilers, and adding workarounds in your frontend for the otherwise-unpatchable behavior encountered in every old version of every major compiler on every platform you wish to support.
Conversely, the advantage of using a single known backend is that you automatically know what code your users are running when they submit a bug report, and you can fork and ship the patch yourself rather than trying to convince an upstream C compiler to accept the patch, then convince the upstream distro to ship the new version of the compiler, then convince all your users to update their platform C compiler.
Walter Bright concurs: https://news.ycombinator.com/item?id=16195031
Kit may have an easier time of it if it's intended only for games, since that narrows down its supported target platforms substantially. The harsh truth is that most game devs only care about one platform (Windows) and one toolchain (MSVC), so at least it can tailor its output to appease MSVC specifically.
I was curious, but there aren't any examples of this on https://www.kitlang.org/examples.html
Zig is awesome - I've used it and would use it again, I'm funding the creator on Patreon, and I see these two languages as filling slightly different niches and having different pros and cons.
EDIT: Your responses in these threads have been great, by the way!
Aw snap, the gold rush is real...
They seem like a variant of C macros, but without being restricted to function-like syntax. Maybe one could implement Python's with statement in userland, which would be very nice.
It also seems like they could replace C++ templates, yet you do have generics. Why both? What's the difference?
My intuition is that ambiguities are at least possible, so what about precedence: which AST term is evaluated first? I'm thinking of some situation where two subtrees match the rule, but you end up with different results depending on which is transformed first. E.g., if you were to define a cross product rule and apply it to a × b × c.
What about infinite loops in rule sets? What about recursion?
What about a situation where multiple rules could be applied? Is there any way to debug the transformations? Is that what the `using` scopes are for? If the compiler was to detect ambiguities, that seems pretty expensive, because it would need to match every rule to every subtree, right?
This thing seems very, very powerful, I just find it a bit hard to grasp what you can do with it in practice, and what the limitations are. I would be very interested to hear what you ran into so far.
>My intuition is that ambiguities are at least possible, so what about precedence: which AST term is evaluated first? I'm thinking of some situation where two subtrees match the rule, but you end up with different results depending on which is transformed first. E.g., if you were to define a cross product rule and apply it to a × b × c.
This is absolutely a potential issue, and the examples I have up now are quite bad. This is why it's important for rules to be strictly scoped. The only rules that can affect an expression are the ones you've explicitly brought in with a "using" block, or those that are defined for the types you're directly working with. Given the scoping, I think overlapping rule applications will be uncommon in practice.
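For readers unfamiliar with the feature, here's a minimal sketch of the scoping behavior described above. The syntax is approximated from the examples on kitlang.org, and the rule and variable names are purely illustrative:

```kit
rules DoubleAsShift {
    // rewrite multiplication by 2 into a left shift
    (2 * $x) => $x << 1;
}

function main() {
    var a = 2 * 7; // rule not in scope: compiles as an ordinary multiply
    using rules DoubleAsShift {
        var b = 2 * 7; // rewritten to 7 << 1
    }
}
```

The point is that DoubleAsShift can only touch expressions inside the `using` block (or expressions whose types define it), so unrelated code elsewhere can't be rewritten by accident.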
>What about infinite loops in rule sets? What about recursion?
There's a sanity limit on the number of times an expression can be rewritten; when it's hit, the compiler shows an error displaying the transformations that triggered the limit (the first, the last, and any non-repeating transformations going backward from the last), so it should be very clear.
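As an illustration of why such a limit is needed: a rule whose output still matches its own pattern will rewrite forever. A hypothetical sketch, using the same approximated syntax:

```kit
rules NeverTerminates {
    // the result ($x + 0) + 0 itself has the shape (something + 0),
    // so the rule matches its own output and grows without bound
    ($x + 0) => ($x + 0) + 0;
}
```

The rewrite limit turns this non-termination into a compile error with a trace, rather than hanging the compiler.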
>Is there any way to debug the transformations?
This is definitely an area for improvement. I'm planning to (1) add enforced "rewrite this" metadata that will fail to compile if the expression isn't rewritten, and (2) enable dumping not only the final AST but also all of the transformations that occurred.
How abstract is that initial AST? Can you use these rewriting transformations to expand the grammar?
With that said, down the line I plan to add procedural macros, and those will likely be lexical (they take a series of tokens as input, which don't need to parse into a valid AST). If I do go that route, such macros would have to be invoked explicitly.
Are people not skilled enough to propose or send commits to an existing or alternative programming language?
There are Rust, Go, Julia, Clojure, and many others...
What is the primary reason to create a new programming language?
There are new type systems and static checks, integrated build systems and package managers, backwards-compatibility features, debugging tools...
What is the primary reason to complain about new programming languages?
How would those "alternative programming languages" you propose have been created if people followed your "advice"?
We'd still be sending patches to C and C++, or perhaps ALGOL?
>What is the primary reason to create a new programming language?
1) Because you feel like it and you don't need to have approval for anybody to do so.
2) Because you want to explore some particular syntax/semantics combination other languages don't offer.
3) As a training exercise.
4) To cover some very specific needs you have, and don't like how other languages do it.
5) To introduce some new ideas into PL design, which might or might not be adopted by users or by a more mainstream language.
With so many, there's also a very low chance of any of them actually catching on, because the useless ones drown out the C++es.
I think people should do what they want, though. If you want to make a programming language, you should; just don’t expect everyone to view your endeavors as useful.
That being said, making a programming language is a much better use of your time than whining on HN.
They did not come from JS. JS took them from Scheme. Scheme took them from ... lambda calculus? Anyways, not knocking on JS, but they did not invent lambdas by any stretch of the imagination. I do agree with the rest of your comment btw.
Face it - people use JS because the browser happened, not because it’s original in any meaningful way.
Even Clipper had support for closures.
Lambdas are actually so hipster that they were cool before we even had programming languages: https://en.wikipedia.org/wiki/Lambda_calculus
I also tend to disagree with the overall sentiment that new languages aren't a good way to influence programming. Haskell, for example, is quite a departure from a language like C. I don't think we would've ended up with the great, proven options we have today if everything was just incremental changes on some base. Sometimes you need to rethink things from scratch. It's part of the reason we don't have one language that fits perfectly for all problem sets.
There are plenty of interviews where he tells how C with Classes came to be.
Copy-paste compatibility with C is a burden on modern computing, but it was also what contributed to C++'s adoption during the early '90s.
As in the human conlang case, it would be alarming or bizarre if the inventor genuinely expected others to start coding in their language, especially if it has no compelling features that are not found elsewhere: although Perl, C++, Rust and others came primarily from motivated individuals "scratching an itch", those langs were able to flourish because they fill(ed) an empty evolutionary niche.
What I do find peculiar is other people then piling in with all sorts of suggestions for enhancements and tweaks that imply that the particular hobby language in question does have a legit future out in the field.
I think the primary value of these new toy languages is they can explore aspects freely that can't just be explored in a more mature language. Sometimes after a concept has proven useful a mature language might be willing to incorporate it.
But I wouldn't use them on a serious project, because there will probably be few libraries and maybe no long term support for it.
I'll just comment on this part. I'm a contributor to Haxe (not so mainstream, perhaps, but more so than Kit). Kit was a chance to try out some features that don't fit well into Haxe (and in fact I did propose several of these features as additions to Haxe, and they were explicitly rejected). Sometimes the answer really is to write a new language, because language simplicity is still important and one language can't be everything to everyone.
And of course, Haxe wouldn't exist if its creator hadn't been unsatisfied with existing languages at the time...
have fun with that
The reason is that creating a new programming language is easier than learning C++.
For this same reason none of these languages will ever amount to anything useful. If you have the patience and foresight to bring a new programming language to fruition, then you have the patience and foresight to learn C++. You grow up and start programming in a real language.
I like to think that when I'm writing Kit, I'm writing C, so I am writing in a "real language." Total interop, no real drawbacks.
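This is the kind of interop meant here: a C header can be included and its functions called directly, with no bindings layer. A sketch based on the hello-world example published on kitlang.org (details may differ in current versions):

```kit
include "stdio.h";

function main() {
    // calls C's printf directly via the included header
    printf("%s\n", "Hello from Kit!");
}
```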