I like Odin (hasenjudy.wordpress.com)
137 points by hsn915 on Aug 28, 2022 | 204 comments



I have high hopes for Odin. There has been a recent trend of C/C++ replacement languages like Zig, Rust, V, etc., with each one trying to find its niche and triumph over its competitors.

Odin's niche is the kind of high performance programming that's done in games and other real time visualization applications.

Other than fixing C's known issues, like adding proper tagged unions, fixing macros, and replacing the crazy compilation model, among others, it has a bunch of features that make it appeal to game programmers.

Features such as structure-of-arrays support for data-oriented design, built-in support for matrices and vectors with shader-like syntax for 3D math, custom allocators, and the like.

And it certainly doesn't hurt that unlike many of these upstart languages, Odin has a real, successful, commercial product built in it, EmberGen, which is used for CG smoke and flame simulations:

https://jangafx.com/software/embergen/


A small but (IMHO) very neat detail is the Pascal-style syntax for defining variables (without requiring a separate 'var' or 'let' keyword):

    // inferred type:
    a := 23

    // explicitly typed, the type is squeezed between the : and =
    a : u32 = 23
OTOH constants now require special syntax:

    a :: 23
...but at least this is consistent with other places in the language, like functions:

    my_func :: proc(...)
...and 'squeezing in the type' also works as expected for constants (which admittedly looks a bit weird, but is also hardly needed):

    a : u32 : 23


So, the pattern is:

    <const_name> : [type] : <value>
    <var_name>   : [type] = <value>
where the type is always optional as it can be inferred?

That's why `a :: 32` declares a constant, but `a := 32` declares a variable...

This is pretty neat, actually.


This pun originates from Limbo, I think. Go inherits the colon-equals for the inferred-type definition, but not the colon for type ascription, yet another way in which it could have been beautiful but isn’t.


It's slightly older than Limbo, and comes from Newsqueak. The difference is that Newsqueak used keywords for the other non-variable declarations. `x: int = 123`, `const X: int = 123`, `type My_Int: int`, etc.


Odin sounds like a nice improvement over C, however this caught my attention:

> If your “dread” of C comes from fear of memory management, then Odin is probably not for you, and dare I say, maybe systems programming is not for you.

I have to say that my dread of C definitely comes from manual memory management. The awkward syntax and compilation model I can tolerate. But having your program expose critical security vulnerabilities because you forgot a weird edge case while managing a pointer is really worrying. For simple programs it's not that complicated, but for complex multi-threaded ones it becomes really hard. And the fact that even the most expert programmers make these mistakes leaves me not much hope.

So perhaps systems programming is not for me? But what exactly is systems programming? Is it developing OS kernels, writing drivers and embedded microcontroller systems? Or is it more than that?

I'm not very interested in writing any of those. But I do want to write programs that run as fast as C or C++, and I want a language which allows good control of resources and has minimal overhead. Sometimes these programs are multi-threaded and quite complex, so I want memory and type safety, and I want tools to create abstractions to tame complexity a little bit. Is all this outside the systems programming definition?


Rust sounds like what you want, though be prepared to deal with near C++ levels of complexity at times (lots of Rust code is too macro happy for my tastes).

That said, the way to deal with memory management in C is to... not do much of it. I know that sounds like a cop-out but the patterns you're supposed to use in languages like Zig and Odin are the same ones you'd use in C to keep your mind sane.

Do not do Reference Counting or Single Owner + Borrowing (RAII); it'll drive you insane without the automation that languages like Swift, Rust, or C++ give you.

Instead, the way you're supposed to do things is with big "manager objects". Only these manager objects are ever explicitly allocated and freed; everything stored within them is managed by them, and they expose only safe handles to the outside world (e.g. generational handles).

Most of the time these manager objects will store some dynamic arrays, hash maps or pools within them that get freed when they are also freed. Note that unlike RAII there is no real "nesting". The managers are responsible for the lifetimes of their whole "object tree".

Any temporary data should be allocated using a temporary allocator (like an Arena allocator) which gets freed at the appropriate place for the application (end of a frame in a game, or end of a request in a web server). Never store pointers to something within the temporary allocator in "long lived" data structures inside the manager object.

Follow these rules and things get manageable, Zig and Odin have lots of facilities in their standard libraries to make this easier, in C you're mostly on your own but there are some libraries you could use like the Apache Portable Runtime.


I’ll admit, the fact that Hello World requires a macro dissuaded me from learning Rust for years.


Technically, you don't need a macro at all:

    use std::io::Write;

    fn main() {
        std::io::stdout().write(b"Hello, world!\n").unwrap();
    }
The reason people often use println! is that println! is variadic and can support any number of arguments to format in the string. For example:

    println!("x is: {}", x);
This also has the benefit of allowing type checking at compile time, as opposed to using a function like printf in C, which does not have such power. Additionally, this allows Rust to take references to the variables entered without the programmer's specification, in order to prevent unnecessary copying.

These reasons tie directly into Rust's philosophy of making it easy to write reliable, performant code.


You’re using the past tense. Did your mind change?


Yeah this kind of framing always rubs me the wrong way. I don't "dread" manual memory management. Indeed, from a personal standpoint I really like thinking about stuff like that while I'm programming. It's part of the fun puzzle aspect that got me into all this. But as a professional, I know that there is heaps of evidence over many decades that it is unwise for me to indulge this fancy, when working on real systems that real people use and which may be exposed to the internet. I consider it a bummer, but I'm very persuaded that this is unwise for pretty much all of the software I work on.


You can have safe manual memory management. The main cases of bugs are:

1. Null pointer deref. Can be fixed by having optional types and requiring that possibly null pointers have to be wrapped in them.

2. Out of bounds references. Can be fixed by making the type system track how big all objects are, and having the compiler insert bounds checking.

3. Use after free. Can be fixed by the free function zeroing heap objects smaller than a page (e.g. 4KB), and unmapping larger ones so that future accesses are a seg fault. The heap also needs to not create new objects at the same address as deleted ones, but we have 64-bit address spaces, so maybe that's fine.

These all have costs, but so do all solutions to these problems.

I can't think of any reasons that a language with manual memory management has to be less safe than one with a Garbage Collector / ARC.


You are correct and Odin supports all of this.

`Maybe(^T)` exists in Odin.

Bounds checking is on by default for all array-like access. Odin has fixed-length arrays, slices, dynamic arrays, maps, and #soa arrays, all of which support bounds checking. Odin does not have pointer arithmetic nor implicit array-to-pointer demotion, which pretty much removes most of the unsafety that languages like C have.

Odin also has built-in support for custom allocators which allows you to do a lot more, with extra safety features beyond the default allocator too. Use-after-free is usually also a symptom of an underlying value responsibility and lifetime problem rather than a problem in itself, which is fundamentally an architectural issue. Ownership semantics in a language like Rust do deal with this issue BUT they come at a huge cost in terms of architecting the code itself to accommodate this specific way of programming.

There is a common assumption amongst many of the comments that if you have manual memory management, you are defaulting to memory unsafety. This is untrue: memory management and memory safety are kind of unrelated in the grand scheme of things. You could have C with GC/ARC and still have all of its memory unsafety semantics.


Odin and Zig solve almost all the memory safety aspects, except for use after free. Having bounds checks, union types, etc. is good and a real improvement over C, but use after free remains a memory unsafety issue. It's the one that's harder to tackle because it's more dynamic ("temporal memory safety"?).


"Use after free" is a symptom of other problems. You can "solve" it by making it very different to do in the first place with something ownership semantics, but there are usually better ways of dealing with it in the first place.

One really good approach is to not use pointers in the first place and use handles. I highly recommend this post for more information: https://floooh.github.io/2018/06/17/handles-vs-pointers.html

Because use-after-free is a responsibility problem, handles are a way to make sure that a subsystem has responsibility over that memory directly rather than have it spread out across the program.

This is why neither Odin nor Zig "solves" this problem: solving it at the language level is not necessarily the best option.


That’s essentially what Rust does: it makes using handles to get temporary access to memory that are owned in a single place very attractive because otherwise the borrow checker will yell at you.


the last thing is safe concurrency. this is the greatest achievement of the rust borrow checker imo. the ownership model means data races are detected at compile time, eliminating a huge class of concurrent programming bugs without forcing a message passing architecture. deadlocks are still possible if you have two or more mutexes but that's a much harder problem


By "manual memory management", I think of explicitly allocating and freeing memory. Assuming you're thinking of the same thing, are you suggesting that the compiler (or other static analysis) could catch all of the issues you listed? If so, the natural question is, why is it necessary to manually insert the allocations and frees, if the compiler knows where they are supposed to go?

This path leads you to something like Rust, which is safe without garbage collection or ARC, but I also wouldn't call it "manual memory management". The trade off they took for this is complexity in the language.


GP's solutions to problems 1 and 2 are the same as Rust's; the difference is problem 3 ("temporal memory safety"). Rust solves this problem with statically analyzed lifetimes, which, in addition to the safety and correctness advantages of compile-time checking, also permit RAII (which answers your "natural question" with "no, it's not necessary to do that"), which makes programming more pleasant. The disadvantage is, as you said, language-level complexity, since it's not enough for the programmer to be personally satisfied that the lifetimes are correct; they have to be represented in the code in a way that satisfies the borrow checker.

GP's proposal is to drop lifetimes and RAII (thereby simplifying the language semantics), make the programmer responsible for allocations and frees (as in C), and solve the temporal memory safety problem by doing additional work at runtime to ensure that use-after-frees reliably crash the process instead of overwriting return addresses or doing other arbitrarily bad things. Whether this is more fun to program in than RAII depends on whether you think it's better to suffer from too much abstraction or too little; people have sharply diverging intuitions on this and it's been a holy war since forever and it probably always will be.

The clearer-cut problem is that such a language would be slower than C or Rust, both because freeing memory involves extra work that C and Rust programs don't have to do, and because the requirement that use-after-frees must reliably behave a specific way inhibits optimization, since the compiler can't assume that use-after-frees don't occur. Also, using memory from a small allocation that's been zeroed out doesn't reliably crash the process unless a pointer in that allocation is dereferenced, and even then, this (contra point 1) would require the compiler to assume that null pointer dereferences can happen and must segfault, which, again, inhibits optimization.


I agree with all of that. How theoretical are the optimization gains from the extra constraints guaranteed by Rust's borrow checker?


Well put!


> Assuming you're thinking of the same thing

Yes.

> are you suggesting that the compiler (or other static analysis) could catch all of the issues you listed?

Yes, the compiler for the first two and the standard library's heap implementation for the third.

> This path leads you to something like Rust

There are stops along this path before you get to Rust. If you just add the 3 things I mention above to a C like language, it would still be perfectly possible to leak memory. But that isn't a safety problem.

The 3 things don't include a borrow checker. You could still make doubly linked lists and graph data-structures.


That makes sense! This isn't the set of tradeoffs that most appeals to me, but I can see your point that it's a different set of trade offs that may be good ones.


An additional tradeoff is that many safe data structures or algorithms are impossible to implement in safe rust. Doubly linked lists being the canonical example. However I feel like the juice is very much worth the squeeze and most rust developers won’t run into these limitations regularly.


Yeah, but I think this is also captured under the complexity trade off. It is possible to create a correct safe interface over an unsafe data structure, but it is complex to do so.


If you had this belief say, 10 years ago, and because of it you decided to write your serious application in say, Java, you would have been in for a huge unexpected surprise last year when the now infamous log4j vulnerability was made public.

See, having memory safety did not prevent the language from causing arbitrary code execution vulnerabilities. Having the log4j project be open source and popular did not prevent that either (so much for the "enough eyeballs" theory).

Going back to Odin, here's why I think memory safety is not as big a concern as people make it out to be:

Your only source of concern is C.

This would be like judging SQL statements as fundamentally unsafe because websites written in PHP tended (especially in the early 2000s) to be written in a very unsafe manner where user input was put directly into SQL strings.

The lesson that people took was not to throw SQL out the window, but to properly sanitize user input before passing it to queries, and to never use plain string concatenation when doing that.

So for manual memory management, the lesson to take from the vulnerabilities that C has caused is not that manual memory management is bad. It's that you need some facilities in the language to reduce the chance of them occurring by several orders of magnitude.

Odin does this by providing the slice type (and string type) that have their length known and providing several custom allocators out of the box.

The cool thing about the slice type is not just that the length is known: the language provides facilities for iterating over the slice that automatically never go out of bounds:

    for item, index in slice {
        // do something
    }
This, and providing a "string builder" type in the core library that lets you dynamically construct a string in a safe way (you don't have to write the code to grow the string dynamically because it has already been done).

These features make the "fear" of unsafe memory access largely unwarranted.

What remains is a matter of what attracts you to programming: are you interested in having explicit control over a system to make it do what you want, or are you more interested in expressing some abstract ideas in an abstract mathematical virtual machine? If the latter, you might find Haskell or Lisp more appealing.


This is the usual speech against memory-safe language adoption, while ignoring the nice security equation,

    Σ exploits = Σ memory_corruption + Σ logic_errors
Having Σ memory_corruption ==> 0 is of course a much welcomed outcome, even if Σ logic_errors > 0.


Highlighting a different systemic problem is not at all an argument against avoiding this problem. Indeed, if memory safety were the only problem to worry about, then I'd be less worried about it, because we could spend more time on it. But it isn't, there are all sorts of other things to worry about, simultaneously.

> This would be like judging SQL statements as fundamentally unsafe because websites written in PHP tended (especially in the early 2000s) to be written in a very unsafe manner where user input was put directly into SQL strings.

This is indeed a good example, but supports my argument rather than yours. If we had devised ways to make it impossible (or incredibly hard and weird) to introduce sql injection vulnerabilities at the language level, then that would be excellent. One less thing to worry about!

I don't have a unique grudge against memory safety issues. It's just one type of issue that we've spent a lot of time devising solutions to that don't just boil down to "be very careful" and I'm generally supportive of any solution to any problem like that, if the tradeoffs are acceptable.

But I agree with you that it's a great thing to have language and library support that make memory safety issues significantly less common and problematic (to be clear, I only just heard of Odin from this article, but I think it looks pretty awesome on initial glance), and that that gets to the level of solution that we have for sql injection in practice. I think there are somewhat better alternative solutions available in the case of memory safety, but they have different tradeoffs.

> What remains is a matter of what attracts you to programming: are you interested in having explicit control over a system to make it do what you want, or are you more interested in expressing some abstract ideas in an abstract mathematical virtual machine?

I'm mostly interested in the first thing, but I think these are both false choices. There is a language that provides that explicit control with more memory safety (rust) with a different trade off (language complexity), and most other languages are memory safe without being focused on expressing abstract ideas in an abstract mathematical virtual machine (go, java, python, etc. etc.).


Seems like you need Go then? You mention you want memory management, type safety and proper abstractions, hence the suggestion.

Of course, you also mentioned C's syntax as awkward for you, so you might feel the same with Go's syntax.


Go is not a fast language compared to C/C++/Rust. It’s more in line with Java as it’s also garbage collected.


Go, despite using GC, is still a compiled language. From what I've seen (including various test results), Go is usually faster than Java.

There is also the issue of what is fast enough. A lot of times, people don't really have the speed requirements they say or think they do, and the code they have written could often be better optimized. It's often better to be specific about what the requirements are before making blanket statements about a language not being fast enough. Clearly, many people are fine with how fast Go is.


That's fair. Go is not as fast and cannot manage resources as well as the languages you mentioned.


You state that as a black and white fact, but there's nuance.

https://github.com/lotabout/skim/issues/317#issuecomment-652...


The wikipedia definition of systems programming seems to exclude it

https://en.wikipedia.org/wiki/Systems_programming


I'm not sure what you mean by "it" but nothing in that page precludes languages with runtimes from writing system software. I'd say Go can definitely claim to be a systems language.


I think Rust is what you need. It basically won’t compile if your code would have runtime errors or memory leaks, except for cases you need to explicitly handle. The linters in VSCode also make it pretty practical to use. It’s been a joy to program in since I started about a month ago.


> It basically won’t compile if your code would have runtime errors or memory leaks

While Rust does move a lot of errors from runtime to compile time, there are still a lot of ways to create runtime errors (which must obviously be the case if your program handles any input at all).

Rust also does not stop you from creating memory leaks; there's even the `Box::leak()` method[1] that allows you to simply leak a heap allocation.

[1]: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.l...


The difference is you need to explicitly handle cases that would cause runtime errors, using .unwrap() or match selectors, which makes it safer in general, since the values are wrapped in Result or Option enums. Code that doesn’t create these types can’t have runtime errors, which reduces cognitive burden when coding.

And yes you can write non-idiomatic Rust that lets you leak memory but it doesn’t happen by accident like in C/C++.


> Code that doesn’t create these types can’t have runtime errors

Rust code can still panic and unwind at runtime. There's some ongoing work on supporting guaranteed-not-to-panic code for very specific uses (similar for guaranteed-not-to-leak, which is a related problem), but it's a long way off and will not be applicable to anything that must interact with the system in any way.


And the compoper error messages are fantastic for beginners.


What is "compoper"?


A typo of "compiler", I'm fairly certain.


Surely a compoper makes a new pope.

Exit code of white smoke signals success.

Be sure to run decompope first though.


Why call it the Great Schism when we can call it the decompoposition?


I think Odin just doesn't do enough. The improvements over C are there, but imho too small to justify/motivate a large-scale change.

And personally I think (!) there is no reason to introduce a new programming language without RAII these days. If you don't solve memory management any more than C did (not), then you are ignoring the biggest problem that needs solving and every replacement that does address this problem looks more attractive.


And for you, you clearly don't need manual memory management nor high control over memory, memory layout, and memory access. And that's absolutely fine, but don't criticize something which other people REQUIRE and DESIRE. You cannot "solve memory management" because there isn't just "one problem".

As for RAII, the article itself showcases some of the alternatives Odin has with the `defer` statement and `deferred_*` attributes. So Odin can have many of the aspects of RAII without tying it directly to a data type/struct/class, which itself has many costs and trade-offs. RAII has loads of problems to it, and requires many other things on top to solve those problems.

This language is just not for you, and that is absolutely fine! But don't proclaim that it's "ignoring the biggest problem that needs solving" when you are assuming everyone has the same needs, requirements, and desires as yourself.

n.b. I am the creator of the Odin programming language; go use a language that will help you solve your problems with your requirements.


Quite the outlashing.

One can use both default RAII in a programming language as well as opt-in unsafe memory management. So you are clearly wrong in your assumptions about what the OP must clearly not require/want.


Still no. Many problems require manual memory management of which custom allocators aid a lot with. Odin has extensive support for custom allocators and many other things.

RAII may not even make sense within the type system of a language too. Languages with RAII (C++, D, Ada, Rust (through Drop), and Vala) all have higher level constructs such as methods, and many have automatic memory management too (Rust's is automatic but at compile time).

First let's take C++ as an example of a RAII language, RAII has numerous flaws to it:

* In practice (though not necessarily), it couples allocation and initialization together, meaning that allocations are rarely bulked together and are scope-governed (which can be very poor for performance). (And I know placement `new` exists, but that isn't implicit RAII any more and doesn't solve any of the other problems.)

* C++ originally only had copy constructors, which usually involved loads of implicit allocations everywhere. This led to copy-elision optimizations and the introduction of move/ownership semantics as a way to minimize these issues.

* RAII is very implicit; it is not obvious to the reader that it is even happening, meaning you cannot just read the code and know what is going on.

* Ctors and dtors are usually assumed to never fail, or require exceptions to handle the failure cases. And not everyone wants, or can even have, software exceptions.

In sum, adding RAII is not a minor thing and to make it even useful without too many flaws requires loads of extra things on top. It's not a simple construct.

I also never said anything about memory safety in my comment. You can have memory safety and manual memory management. The question is what level of safety and in what form. Odin has pretty much all the general memory safety features (bounds checking, Maybe types, distinct types, no pointer arithmetic, default to slices, virtual memory protections, and many more). What Odin does not offer is ownership semantics and lifetime semantics, which is an entire discussion itself which I won't talk about in this already long comment.


I want to address some points for people that are reading and don't have enough C++ experience to judge that they are not as serious as they might seem.

* You can just have your constructors not initialize your member variables. If they don't have default constructors that do work, then no work is done.

* Yes, originally. Move-semantics are part of the language since C++11. Enough time has passed.

* I read and write C++ every day at work and in my free time and I rarely find this to be a problem. If I see a non-trivial type, I assume its destructor is called at the end of scope. And whether I have to look up the destructor of some type or a corresponding free function (like in C code usually), makes no difference to me.

* Gamedev can live entirely without exceptions and many other software projects do as well. You just have to write your constructors so that they do little work (which is encouraged anyways) and use methods to initialize them. Having static methods that return an optional<T> is also pretty common.

Of course you can also have constructors that do a ton of work, so you never know what is being done, exception handling everywhere to error handle all that code and not use move semantics or very old C++ versions. There are always ways to use a language in a bad way and get bad results. I am sure you have seen bad C code.

Of course RAII is not simple. It's extremely complicated, which is why I consider it missing. Using a new language needs to be justifiable by some significant added value.


> You can just have your constructors not initialize your member variables. If they don't have default constructors that do work, then no work is done.

So your constructors don't do any construction? That defeats the entire point of a constructor.

> Yes, originally. Move-semantics are part of the language since C++11. Enough time has passed.

First, move-semantics solve a lot of the issues with copy-constructors in C++, as I previously stated. Secondly, what has time got to do with this? It's a solution in C++'s RAII with copy-constructors. Not the only possible solution but the one that the C++ committee settled on.

> I read and write C++ every day at work and in my free time and I rarely find this to be a problem. If I see a non-trivial type, I assume its destructor is called at the end of scope. And whether I have to look up the destructor of some type or a corresponding free function (like in C code usually), makes no difference to me.

I'm also assuming that your code base uses constructors and destructors all over the place and is absolutely fine with their added costs. And that's fine. But they are implicit: you don't necessarily know a constructor is being called just from reading the code. That is a statement of fact, not really a criticism.

> Gamedev can live entirely without exceptions and many other software projects do as well. You just have to write your constructors so that they do little work (which is encouraged anyways) and use methods to initialize them. Having static methods that return an optional<T> is also pretty common.

Firstly, is the argument here to make constructors only do trivial things (which rarely ever happens in practice, especially if you use anything from the STL)? Secondly, many game devs have an explicit `init` method for the exact reason that you can separate allocation and initialization, and have the ability to handle failure cases with `init`, usually with a return value indicating the failure state. You can have static methods, yes, but then you are literally getting around the construct of an implicit constructor and having an EXPLICIT construction call.

RAII itself is actually very simple, but to make it useful is complicated and complex. The issue with RAII is not necessarily the scope-exit semantics but rather coupling this within the type system itself as a way to have scope-hierarchical-based management of resources.


I absolutely agree. I've written a few hundred thousand loc of C++ at this point and I don't remember one time where I felt that having RAII was anything but great. It made me absolutely hate whenever I had to work in other languages like Java and C# and had to remember to release non-memory resources manually.


That’s basically any GCed language, and the fix has been present for decades at this point: try-with-resources, ‘using’, ‘with’ (with-whatever IIRC can be traced back to Lisp…)


no, those are absolutely not fixes: you have to remember to use "using", "with", etc. whereas you have to go out of your way to circumvent RAII


Non-issue. You can use fopen in C++ and remember to fclose, too.


No, every time you have something that sounds like "you just have to [...]" it means future bugs that were entirely preventable. The only correct way in C++ is to wrap it in a unique_ptr or handle type so that fclose gets called automatically when you are done with the resource.


Use static analysers, just like you have to do in C and C++ to handle many of their flaws.


The value is having an alternative closer to C and farther away from the garbage lang that C++ has become.

No one is expecting you to rewrite your entire project in Odin but those that are starting new projects have a high value choice with Odin and others.


My point was that I don't see much of a point in yet another language that is C with an extra 10%. There is another one like that almost every week.

Plenty of applications that require manual memory management have been written in C++ or Rust, which do provide RAII. If you know that manual memory management is required for some applications, you also know it's not 100% of the code that needs it.


Do you honestly think Odin is "C with an extra 10%"? Or any of the decent alternatives that are available (e.g. Zig, Jai, etc)? Because if you really think that is the case, you know nothing about these languages nor have ever used them. All of these languages provide a hell of a lot more than "10%".

As for RAII, I do explain in other comments (https://news.ycombinator.com/item?id=32629951 & https://news.ycombinator.com/item?id=32631462) why RAII has alternatives and is not necessary for every language. If you truly believe that RAII is necessary for you, then use a language that has it (e.g. C++, Rust, D).


[flagged]


You're right. I'm sorry. It got too heated below my initial comment and I got upset. I'm not in a good place currently and I should probably refrain from commenting at all. And yes, I am not fun at parties.


Hope you get to a better place soon. I'm not as fun at parties as I used to be either. It's been a rough few years. Best wishes


As someone who programs professionally in C but writes Odin as a hobby, Odin contributes more like 1000% ease of use, comparatively. And I bet that scales up even more on a team as compared to C.

ASAN, complex build tools, undefined behavior, implicit type conversions, etc.: each of these contributes substantial problems all on its own, and they basically never arise in Odin from the start.


But safe memory management is the biggest problem that needs solving. It’s empirically impossible for humans to write secure, complex programs while managing memory by hand.

I agree with OP, that Odin doesn’t go far enough toward safety in this regard. We do all require and desire security.


What is interesting is that the comment never mentions memory safety, only memory management; you just added memory safety into the discussion.

Memory safety and memory management are not equivalent concepts. As I state in another reply (https://news.ycombinator.com/item?id=32629951), you can have memory safety and manual memory management. And Odin offers numerous memory safety features as well as being designed around the power of custom allocators.


> What is interesting is that the comment never mentions memory safety, only memory management; you just added memory safety into the discussion.

Can’t speak for OP but I also assumed that you were alluding to being able to use unsafe memory management in this part

> > And for you, you clearly don't need manual memory management nor high control over memory, memory layout, and memory access.

Since high control usually means being able to do whatever you want (compilers always have to be a bit conservative).


"usually" by whom? "High control over memory" doesn't necessarily mean doing anything "unsafe" with it either. That term could allow for "unsafeness" yes, but it is not solely restricted to that. High control could just mean adding extra annotations about the access, or not having padding in a struct, or specifying the alignment, or many other things, none of which may be classed as "unsafe".


I don't think any feature should be a mandatory part of a programming language, provided its reasons for not including it make sense. I'm fuzzy on Odin, if you showed me some Odin and some Hare I bet I couldn't tell them apart, but it kinda looks like a nice C with syntax inspiration from Go and (hence) Pascal.

A language like that, as long as the object code is correct, then hey, it's there for people who want it.

I think Zig, which I understand better but don't use, has a good reason for not building in RAII. Zig has a "no allocation without an allocator" rule, and what's easiest to describe as native valgrind. It also has comptime, and RAII loses some conceptual simplicity when you start doing transformations between what-you-see and what-you-get.

I think it's worth continuing to explore whether that's a sufficient approach to memory safety, rather than doing something which has already been done, prematurely. I happen to think it's a promising approach, favoring clarity and simplicity.

We'll see. Zig can always add RAII later, and so can Odin. But at most once each, so it pays not to be hasty.


Regarding Odin and Hare, I'd argue most people would easily be able to tell them apart from the syntax alone because the declaration syntaxes are quite different. And most people judge a syntax pretty much solely by the declaration syntax itself.

Regarding RAII, I explain in another comment (https://news.ycombinator.com/item?id=32629951) the issues with RAII and what it requires to be useful. Odin has a similar rule of "no allocation without an allocator" but has the implicit `context` system which passes the current context's allocators around. Odin also has core library support for valgrind, callgrind, memcheck, and soon helgrind.

Odin has features (which this article does showcase) which are akin to RAII but are not attached to the type system. And as I explained in the previous comment, RAII wouldn't even make much sense in Odin (or Zig for that matter) and it would probably never be a good idea to add it later. There are better alternatives to RAII for languages such as Odin and Zig.


Hi Bill, wasn't intended as a criticism of either, or as an observation about the syntax choices, but rather as a way of saying how familiar I am with both: I could say some things about them which are accurate, but not tell code samples apart.

RAII isn't something I happen to be looking for in a language, it kind of falls between Rust and your work in a way that I don't have a use for.

It's an interesting domain, isn't it? You'll always have people reacting from the hip to the problems C has given us, but they'll be back a couple hours later to talk about the wonders of SQLite.

SQLite got there by being careful and building tools to keep them out of trouble. I see an important role for languages which build those tools in, but don't try to replace a manual transmission with an automatic.


I never took it as a criticism, rather a clarification for others and to explain how Odin does things.

And it is indeed an interesting domain. Something to be very careful about before forming rash opinions, too. It is such an unexplored field, with so much potential!


How do you write your garbage collectors or RC without precise manual memory management?

How do you render billions of vertices and maintain a gameplay loop under 8ms for AAA games without precise manual memory/layout management?

Not everyone is writing websites in javascript


I guess they are referring to something like Rust which manages to "solve memory management" while still letting you write high performance code.


Here, a GC implemented in Oberon, a GC enabled systems programming language,

https://people.inf.ethz.ch/wirth/ProjectOberon/Sources/Kerne...

For something more modern in D, also a GC enabled systems programming language,

https://github.com/dlang/dmd/tree/master/druntime/src/core


D is GC by default (and currently required for larger parts of stdlib) but also allows disabling GC and doing manual memory management.


Just like plenty of GC enabled systems programming languages, this isn't a XOR.


D uses manual memory management (malloc/free/realloc) to implement its GC.

So I don't know what point you are trying to make.

D is pragmatic and confirms my point.


Like every systems programming language with GC support.

Modula-2+, Modula-3, Nim, Mesa/Cedar, .NET Native, Sing#, Oberon, Oberon-2, Oberon-07, Active Oberon, Swift (RC is a GC algorithm per CS definition), and even if debatable what its role might be, Go.

The point I am making is having a GC managed heap doesn't preclude features for manual memory management, only GC haters think otherwise.


You don't understand the point: GC is one way of managing memory, it is not a panacea.

Some helpful reading, from people actually working on solving hard problems

https://twitter.com/FilmicWorlds/status/1562090212225716224

> only GC haters think otherwise.

only GC lovers think like that

that's not a good argument


Random links about a game studio, incidentally one famous for using Lisp-based scripting languages before being acquired by Sony... yeah right.

You still don't get GC enabled system programming languages, and apparently I am not the one that is going to make it clear, so whatever.


You still don't get it, and that's ok


Indeed, why bother isn't it.


RAII is often optional automatic memory management, like in C++ or Rust. Both of these languages are used for AAA games and rarely used for websites. My point was that if you don't go as far as those two, there is just not much of a point to not using C instead.


Rust is not used in the AAA industry, it was advertised by a company (Embark), but they ended up using Unreal Engine 5 for both of their new upcoming games


NOT TRUE. You would of course choose Odin over C unless you could not for some reason.

Odin is a 100x better environment to program in than C. C++ is used for AAA games. Rust is not really used, if at all, because of the highly competitive and time-critical nature of the work and the very real need to change things fast.


I agree RAII at a minimum should be available, but why not just use c++ at that point? I'll just go to something like zig or rust that is established and has thousands of libraries available.


I recommend reading my reply to another comment regarding RAII: https://news.ycombinator.com/item?id=32629951

Also the article itself demonstrates different approaches to achieve behaviour similar to RAII through the `defer` statement and `deferred_*` attributes.


I'm happy to see Odin chose snake_case instead of camelCase. At one time Zig debated switching to snake case, but that ship has sailed [0], sadly.

It seems like a small bike-shed level comment, but when I consider the code I have left in me, I'd like it to look as nice as possible.

0 https://github.com/ziglang/zig/issues/1097


My fingers hate snake_case unless I'm in an IDE with working auto complete, and even then the repeated "shift-minus" when defining variables/functions is just hell on my typing speed. I don't have to remap my keyboard to avoid RSI when using camelCase, and I also don't have to switch keyboard apps on my phone when typing code examples.


I totally understand the preference, as using snake_case has grown on me. But, when you jump between many languages (with different case usage), you kind of have to be agnostic about such things.


I much prefer camelCase over snake_case. No idea why.


curious, why do you prefer snake_case to kebab-case?


I prefer kebab-case but it’s apparently too much to ask for in infix languages because most of them insist on allowing expressions like `a+b-c`.


Never use kebab-case. Not even where you can (e.g. filenames). Sooner or later you're going to want that name as an identifier in a programming language that doesn't support it (approximately all of them), and then you're going to have to come up with annoying rules to convert between them, and make all the code that uses it harder to follow and to grep.

Obvious example is CSS.


kebab-case conflicts with binary expressions, specifically subtraction? Some parser jujitsu would be required to fix this.


Generally, kebab-case means that operators must be separated by spaces. Languages like FORTH or LISP might not even treat operators as separate from ordinary functions.


Not with a parsing expression grammar! They make it easy to say that operators must be separated by spaces only when the alternative would be a valid identifier, and if you disallow trailing hyphens this matches most cases:

   a-variable; a-1, a-(expr()), 1-a-variable (don't do this)
I think the last one illustrates why this will never be popular.


> As someone who is interested in systems programming, and has experience working with Go

I wonder how Go got its reputation as a systems language. Imo, it occupies the same abstraction level as Java or C#.


When Go was first announced ~2009, Rob Pike explicitly framed it as a systems language

https://www.youtube.com/watch?v=rKnDgT73v8s

I would not say it's the same as Java or C#. The crucial difference is that it compiles to a native executable binary file, not something that needs a virtual machine.


It is possible to compile Java and C# binaries. It is true that it is not as common and they were not designed that way, but I think this difference is mostly an implementation detail. I think these are indeed the best comparison languages to Go. They're all mature statically checked languages with good garbage collectors and concurrency features.


I think Rob Pike has changed his mind. I remember watching a YouTube video featuring language designers (Golang, D, and Rust), where Rob Pike, Andrei Alexandrescu and Steve Klabnik talked about the stories behind the three languages. Rob Pike said something like, "Golang was mistakenly called a systems language; it should be called a server programming language instead".

I tried to find the video on youtube, but to no avail.


C programs are executed inside the C virtual machine.


You mean the C abstract machine?


Can't tell if serious...


Either a reference to Abstract Machine from the standards, or something suggesting the processor is a C virtual machine.


Interesting article:

"C Is Not a Low-level Language" (https://queue.acm.org/detail.cfm?id=3212479)


I have read that.

While I feel that article is being too harsh on C, it would be interesting to have cache intrinsics just like we have SIMD intrinsics. I would imagine that would be more complex to implement, however.


The simple answer is that it was announced as one.

The more complicated answer is that, traditionally, systems programming languages were those which are not scripting or assembler languages. If you listen to Pike for more than a few seconds you'll soon notice that he's a bit of a language purist, so whatever modern interpretation you might have for a term is unlikely to match what he is communicating.

More specifically, it was announced as a systems programming language designed for things like web servers and systems of that nature, with more control than Java in some areas. Even with conflicting definitions of systems, there should have been no illusions about it being designed to be in roughly the same space as Java with that context. It was very much suggested from the onset that it was meant to compete with Java.

But ultimately the game of telephone truncated the context that would have helped with finding the right definition of systems, and, exacerbating things, there was Rust coming onto the scene juicing the situation, with its fans often claiming that "Go isn't a systems language, Rust is!", leaving some revisionism about people having believed that Go was a systems language (in the modern sense).


Because languages like Oberon exist, used to write graphical workstations operating systems, and Go has the same set of features as Oberon.

Unless one doesn't consider writing compilers, linkers, GPU debuggers, container management, syscall emulators, unikernels systems programming.


>Unless one doesn't consider writing compilers, linkers, GPU debuggers, container management, syscall emulators, unikernels [to be] systems programming.

Since you can write the above with any language, perhaps even Python or Lua if you wanted to, it's not exactly the ability to write the above or the fact of having written the above in a language, that makes a language to be considered a "systems programming language".

In some way, what is called a systems language is not a technical capacity thing ("can do X, Y and Z, so it's a systems language").

It's a term applied deliberately, that also includes other aspects.


I challenge you to find me two concurring definitions of systems programming that also include enough detail to set clear bounds between what is and isn't systems programming. If you succeed at that, I further challenge you to: find me two concurring definitions of systems programming language that include enough detail to qualify and disqualify languages consistently.

By "enough detail" and "clear bounds" i mean: a group of people independently arriving at the same sets of classification based on your provided definitions.


>I challenge you to find me two concurring definitions of systems programming that also include enough detail to set clear bounds between what is and isn't systems programming.

There's no such thing - that was the whole point of my comment.

A systems language is not such because of conforming to a definition; it's a deliberate classification ("this language is, that one isn't" as opposed to "this language is because it conforms to this definition"). And that "is/isn't" isn't even up to the individual programmer; it's cultural.

There are characteristics that drive this classification, but it's not driven by a strict definition. It's more of "know it when I see it" kind of affair.


Pity that even ISO C fails at it, and needs language extensions to play the game.


Sure one can play word games all day long if that is your point.


Civilization is all about word games, is my larger point, and those games are important business and what drive characterization; they're not something inconsequential that we can "set aside and get to the real work".

Doubly so if what we're concerned about is labelling itself, like "what is and what isn't considered a systems language". Then we're in the word domain, not in the measurement domain.


Actions matter more than words, and in that field Go has already proven itself, regardless of what the priests of the holy church of system programming might preach at the pulpit.

In fact their holy C can only be used for the daily activities, when going outside the ISO C Bible, tainting itself with unholy compiler extensions.


AFAIK Go was and is primarily used to write back-end server software and this is commonly called "systems programming" (as opposed to application programming) too.

It is kinda confusing that the term is used for that and for low-level embedded/firmware/kernel programming, despite both areas having little in common with each other.


> and this is commonly called "systems programming"

Is it, though? My impression it only started with Go calling it that way


System programming languages also used to refer to non-scripting languages. See this paper by John Ousterhout (creator of Tcl) written in 1997: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...


I don't think so. I think it's just the way Rob et al. used the term from their previous experience.

It's not meant to mean operating systems language, nor embedded systems language.

Rather for writing parts of a system, such as servers. I would say that the definition is not that far from Java or C#, but the expectation is simply that it would be "lower level" components of a system, including unix-like utilities.


Not unreasonable to argue that something like Docker or k8s is more "system software" than an application.


I think the definition of "systems" changed over time. It used to mean building the foundations of operating systems, e.g. kernel and basic userland utilities.

Now it means building higher-level userland tools and internet oriented tools. I think the implicit consensus is the lower level stuff is a "solved problem".


I always interpreted go "system" as networked async systems and not electronic chip systems.


I think this makes sense considering that most of the cloud-native software is written with Go. (Docker / Podman, Kubernetes, Helm, Istio, Terraform, etc.)


I don't know if I'm crazy but it matches the definition of a system. Coupling different parts together. These are just high level systems, and unsurprisingly when people see 'system programming language' they want to see high level assembly like C; but it's misplaced reflex in this case.


I agree with the others. Go is its own abstraction level.

Not powerful enough to be compared to Java.

Probably closer to Node.js, that's to say, it is like JavaScript in the server. Judging by the developers who use it. Except it is compiled.


Maybe Go compiling to native binary with excellent performance and being somewhat "simple" (say, compared to Java 8 streams conveniences etc), has given rise to that impression. Though I wasn't under that impression myself (I think Go having a GC compared to usually more manual memory management by systems languages is a pretty significant difference).


Substack link if you are reading from mobile or otherwise don't have an adblocker:

https://hasen.substack.com/p/odin-praise


Drats, not the Norse deity. In case you're also disappointed, I recommend Votan by John James

https://books.google.com/books/about/Votan.html?id=y-OlAwAAQ...

"""

In the second century AD, a Greek nobleman is travelling and living abroad in Germany while carrying on an affair with a military man's wife. When discovered, he takes an emergency business trip to save his life and packs amongst his belongings certain items that lead the people he encounters to think him a Norse God, a fortuitous point of view which he does little to dispel. Forced to keep up the pretence of being a god while staying one step ahead of his lover's jealous husband, Photinus must juggle the severity of his situation with the enjoyment of being a God.

"""


Sounds like The Road to El Dorado :)


It's that sort of tale, the picaresque adventures of a likeable scoundrel (or two). The John James tale checks most features of the Odin story with explainable real-world incidents.

He wrote a couple of sequels that go on to Irish mythos, etc.


Nice, it also supports distinct types (Similar to Nim): https://odin-lang.org/docs/overview/#distinct-type


Odin is probably the most productive I’ve been in a language. I went from reading the docs to having a working interactive PBD particle sim in VR via OpenXR in around 48 hours wall clock time.

I am currently writing a rigid body engine in it. It is perfect for graphics/games programming.


The most important aspect of a system programming language, besides performance, is the readability of the code.

It will always be more difficult to read (and understand) a program than to write it, so, unless this is some throw away code, ease of reading should be the priority.

Unfortunately, the speed/ease of writing is often used instead.

Ease of reading means linear, boring, plain English with minimal use of arcane symbols or regex-like expressions.

In this regard, meta-programming is very often counter-productive, as it can introduce new idioms and constructs in the language that have to be understood before reading the program itself.

Overuse of C macros or C++ templates comes to mind, but I think that lack of readability was also a huge problem for LISP and Forth.


"I like Odin" - me too, just discovered Odin last week. The syntax is just about as perfect as a procedural language can get.

I don't do C professionally and really only used it for standard uni classes for algos, data structures, systems programming, OS, etc. But seeing Bill's streams highlighting the clarity of the Pascal pointer syntax has been eye-opening. The C declaration "spiral" seems so utterly unnecessary compared to how simple it is to read Odin left to right. It's good to have a physicist's take on a programming language. Thanks for the hard work!


Odin is currently at the top of my Rust exit-ramp strategy (for personal work).

As I grow older, I find myself wanting to use languages that are more conservative in their designs. Back in my 20s, I was all about expressivity and meta-programming: Scheme, Smalltalk, OCaml, and Haskell were my drugs of choice. I looked down upon those simple peons who used pedestrian languages like Java or PHP and I definitely looked down on those neckbeards who hadn't left their C caverns and didn't know the greatness of closures, higher-kinded types, hygienic macros and all those features that real languages had.

That's how I came to learn about Rust: in the early 2010s, there was a post on Reddit about a new programming language implemented in OCaml. Even before Rust 0.1 was released I was a Rust enthusiast. And what a ride it's been! Rust transformed and evolved a lot since its first days, but what a language it ended up becoming! In just a few years, Rust went from being a hobbyist toy to an industrial-strength tool used by large projects and corporations like Mozilla, Dropbox, Discord, Amazon, Facebook, and more. I've been using it professionally myself for the past 5 years and it's been (mostly) a great experience.

But as Rust and I continue to change, it seems that the paths of our lives are diverging. As I gather more gray hair, I don't have the energy, the time, or even the passion for learning and mastering exotic language features anymore. I now avoid a lot of the "cool toys" that I was so enamoured with in my 20s. My programming style is now much closer to what it was when I first learned to program in Turbo Pascal in the mid-90s: mostly functions, arrays, and structs. Rust, on the other hand, continues its quest to be an industrial language with ivory tower creds: the community and the core team enthusiastically look forward to having more ways to abstract code, more ways to express constraints at the language level. I feel that before long, my values and Rust's will have grown so far apart that we'll have no choice but to break up our long relationship.

So I've been looking at what else I could use for personal projects. (I think and hope that I'll continue to use Rust in my professional career.) Odin is at the moment my clear favorite, with Zig slightly behind, and Nim after that. I find that Odin's design represents my own values about what software and a programming language ought to be like better than anything else out there at the moment. It's high-level, has out-of-the-box support for dynamic arrays and dictionaries (the two most useful data structures, I think), and the whole language is simple enough that I learned most of it in a weekend. It does feel like a step back from Rust in certain areas -- null pointers, using product types for returning errors, no ownership checking -- but I think I'm at a point where I'm ready to forgive and deal with these issues in order to have a tool that is simpler and doesn't evolve at the speed of 20-year-olds.
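For anyone curious, those two built-ins look roughly like this (a minimal sketch; the identifiers are mine):

    nums := make([dynamic]int)   // built-in dynamic array
    defer delete(nums)           // manual memory management still applies
    append(&nums, 1, 2, 3)

    ages := make(map[string]int) // built-in hash map
    defer delete(ages)
    ages["ada"] = 36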


>As I grow older, I find myself wanting to use languages that are more conservative in their designs.

What about Hare? It's meant to be a modern, boring (read: simple and stable) language.


Sounds like you’ve just found a new way to look down on other people’s PL preferences. Nothing’s really changed.


I like how newer languages with a C style syntax have more readable type declaration mechanisms than C (some even Pascal-like to an extent).

But I am not sure what I would prefer in some cases when looking at these new languages.

For example, doing some alloc/pointer stuff in Odin looks like:

    ptr := new(int)  // allocate an int on the heap
    ptr^ = 123       // write through the pointer (postfix ^ dereference)
    x: int = ptr^    // read it back
    free(ptr)        // manual free
And in Hare:

    let ptr: *int = alloc(123); // allocate and initialize in one step
    let x: int = *ptr;          // prefix * dereference, as in C
    free(ptr);
Which approach do you prefer, and why?


https://www.bell-labs.com/usr/dmr/www/chist.html

From Dennis Ritchie:

> Declarations in C must be read in an `inside-out' style that many find difficult to grasp [Anderson 80]. Sethi [Sethi 81] observed that many of the nested declarations and expressions would become simpler if the indirection operator had been taken as a postfix operator instead of prefix, but by then it was too late to change.

Apart from that choice of syntax, both examples seem equivalent.


I prefer `^` for pointers because it's pointy, and having it after the value because it's less ambiguous when combined with subscripts and field accesses.
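For example, chaining a dereference with indexing and field access reads strictly left to right. A contrived sketch, assuming some `nodes: []Node` slice (the type and names are mine):

    Node :: struct {
        value: int,
        next:  ^Node,
    }

    // compare C's `(*nodes[i].next).value` or `nodes[i].next->value`:
    x := nodes[i].next^.value // explicit postfix dereference
    y := nodes[i].next.value  // field access also auto-dereferences pointers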


What happens when you mutate the target of a pointer to the same memory as the target of an "immutable" reference (e.g. function parameters and union switches)?


If I understand correctly, procedure parameters are immutable only within the scope of the procedure, because the compiler decides whether to pass them by value or by reference.

At the calling scope, the parameter passed is not immutable. You can pass a pointer too, if you want.

There's also this pattern:

    name := name
which redeclares `name` in the current scope, shadowing the `name` from the outer scope:

    call :: proc(b: int) {
        fmt.println(b)
        // c := &b // illegal: cannot take the address of an immutable parameter
        // b = 10  // illegal: parameters are immutable
        b := b     // shadow the parameter with a mutable local copy
        b = 10
        fmt.println(b)
    }


I read Odin's documentation some time ago and was quite impressed. But I wonder if it shouldn't copy some Zig features: for example, unsigned integers not wrapping by default is a good idea IMHO: it may detect errors in debug mode and may produce more efficient code in release mode.

I've been programming in C or C++ since 1993 and I can count on one hand the number of times I've used unsigned wrapping intentionally (for clock management).


Reading through Odin (very neat overall!), I am a bit confused about the additional boolean types. `bool` seems to be your standard boolean, but then there are b8, b16, b32, and b64. Bit fields seem to be their own thing still, so are these just boolean types with a wider backing, for something like a data format that stores them that way?


There are a few reasons as to why there are different sized booleans in Odin.

One is dealing with foreign code. A lot of old C code used its own boolean type before one was standardized, and many people defaulted to typedeffing `int`. A good example of this is Win32's `BOOL`, which is `int`-sized and would be backed by `b32` in Odin.
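For instance, a binding might simply declare (a sketch, not the actual Win32 bindings):

    BOOL :: b32 // matches Win32's int-sized BOOL

    ok: BOOL = true
    #assert(size_of(BOOL) == size_of(i32)) // 4 bytes, unlike the 1-byte `bool`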

Another reason is that file formats may use boolean widths other than a single byte.

Another reason is that operations closer to the register width can be faster than byte-wide operations. So `b32` or `b64`, even though they take up more memory, can be faster to deal with than `bool`/`b8`.

As for `bit_set`s in Odin, they are backed by integers and are a brilliant solution to the problem of flags. They usually become a lot of people's favourite (but small) feature because of their ease of use and the clarity of what they express.
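A rough sketch of what that looks like in practice (the enum and variable names are mine):

    Render_Flag :: enum u8 {
        Wireframe,
        Shadows,
        Bloom,
    }
    Render_Flags :: bit_set[Render_Flag; u8] // flags packed into a single byte

    flags: Render_Flags = {.Shadows, .Bloom}
    flags += {.Wireframe}              // set a flag
    flags -= {.Bloom}                  // clear a flag
    if .Shadows in flags { /* shadows enabled */ }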


Thanks, appreciate the detailed answer!


Wow. The comments in a lot of these threads look to be full of snark and anger towards other programming languages and people. I wonder if we're approaching another one of those periods where people need to be at [not their computer] again.


I think the reason he likes Odin, as a person who used Go, is that it's arguably an offshoot language of Go. Odin (https://odin-lang.org/docs/overview/) has borrowed a lot from Go, which is easy to see in its syntax and concepts.

Another language which is in both the C and Go alternative language category, is Vlang (https://github.com/vlang/v/blob/master/doc/docs.md). For anybody that has used Go, these are definitely languages to check out and would be easier to learn.


In terms of actual features though (ignoring trivial syntax), Odin resembles Jai more. In fact, when I first found out about it a few years ago, I thought it was an open source clone of Jai.

Many features have diverged since then, and today Odin is ahead in many respects (mainly that the compiler is open source, but also that it's already being used in production via EmberGen). I still really want the full metaprogramming capabilities that Jai has, though (which Odin currently lacks).


> which is lacking in Odin currently

gingerBill (the creator/designer of Odin) has been quite clear that Odin is done (in terms of language features) and that he does not intend to ever include support for "macros" or other advanced metaprogramming features.


I see it more as Odin having a very strong Go-like foundation that then borrowed heavily from Jai in terms of syntax and various features, so that superficially it looks like Jai, to the point that people could think it was a clone or fork.

Also, Jai is a lot more Go-like than many people realize. When Jai diverges from various C/C++ concepts and traditions, it can do so in Go-like ways.

> ...Odin is ahead by many aspects (mainly that the compiler is open source, but also that it’s already being used in production...

Totally agree with you here. It could be argued that Jai has fumbled its advantage, or at least the window of opportunity to claim it's more innovative or more game-programming friendly than other languages. Odin has pretty much covered that gap, so most of what people liked about Jai is in Odin and can be used today.


> To the point, people could think it was a clone or fork.

You can’t clone code from YouTube videos (practically speaking).


I wouldn't say that it has a very strong Go foundation, because it seems that Odin doesn't have an equivalent to Go's channels.


Well, such things can be subjective. Yes, Odin does not do concurrency and channels. This appears to be because Odin doesn't do automatic memory management and puts more limits on the language, in terms of focus. It can be argued that Odin adapted itself to being a strong Jai alternative appealing to gaming, rather than being more general-purpose.

My understanding is that Odin does not plan to ever put concurrency or channels into the language. It looks like some kind of workaround, if it ever happens, will have to be implemented by a 3rd-party library.

It should be added that Vlang (the other Go alternative mentioned), because it was designed around various memory management options (both automatic and manual), does support concurrency and channels (https://github.com/vlang/v/blob/master/doc/docs.md#concurren...). Vlang aims to be more general-purpose rather than more specific.


Odin has numerous concurrency primitives in the core library, and the synchronization primitives are based on the modern `Futex` construct:

https://github.com/odin-lang/Odin/tree/master/core/sync

https://github.com/odin-lang/Odin/tree/master/core/thread

We are planning on adding channels to the core library but we have not added them yet since there are quite a few different forms (SPSC, SPMC, MPSC, MPMC).

One thing that I would like to bring up is that Odin differs from Go in that it does not have `go`routines (green threads). These require automatic memory management and a huge runtime to make them possible. And you cannot easily add them after the fact to a language either without huge issues. Channels are very basic things but the thing that makes Go's concurrency powerful are the `go`routines, not the channels.
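For reference, plain OS threads via `core:thread` look roughly like this (a sketch based on my reading of the library; the exact proc and field names may differ between Odin versions):

    package main

    import "core:fmt"
    import "core:thread"

    worker :: proc(t: ^thread.Thread) {
        fmt.println("hello from thread", t.user_index)
    }

    main :: proc() {
        threads: [4]^thread.Thread
        for i in 0..<4 {
            t := thread.create(worker)
            t.user_index = i // small per-thread tag provided by the library
            threads[i] = t
            thread.start(t)
        }
        for t in threads {
            thread.join(t)
            thread.destroy(t)
        }
    }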


Thank you for the additional information and correction.


You can’t conclude that a language has taken direct inspiration from another just because they look similar.


https://odin-lang.org/docs/faq/#what-have-been-the-major-inf...

Fortunately there's a FAQ, and from that there are two Rob Pike languages (Newsqueak and Go) as well as a pair of Wirth languages (Pascal, Oberon-2).


That’s fair. That’s however not what the OP claimed, which is what I was responding to.


The creator of Odin has made it very clear he was inspired by and borrowed from both Go and Jai, which is perfectly acceptable. It's also very obvious to anyone who has looked at the documentation or used the languages. The other poster already gave you a link where Go is acknowledged. If you investigate further (Google), you will see that Jai inspired and was borrowed from too.

I suppose your argument is about Odin being compared to Jai, but it's very obvious they are in nearby categories. Pretty much whatever a person would have used Jai for, they can use Odin for. Maybe you also want to watch the YouTube comparison of Jai versus Odin (https://youtu.be/M763xHjsPk4).

And, it might hurt some feelings, but I'm very understanding of people who would rather use Odin than wait around for Jai.


Does anyone know what influences were taken from Oberon-2 in Odin?


It is not hard to design a language which is a "better C than C" because C has obvious warts, the article mentions a few.

But the warts just aren't big enough to justify switching to a different language. That's why no "better C than C" has ever become really popular.


Simple is not easy. The devil is in the details.

Many people can try. It's not easy to do a good job at it.


The solution is a language that seamlessly interoperates with C, like Zig does (no FFI, high impedance match with C, cross-compiles C, exports to C ABI, transpiles C, compiles to C).


Odin Homepage: https://odin-lang.org/ (I was curious)

I can't see this being generally successful unless it can interoperate with existing C code without much effort. Update: It appears it can: https://odin-lang.org/docs/overview/#foreign-system

It apparently targets LLVM.


I recently wrote a bindings generator to Odin for my C libraries, and the FFI is very well thought out, down to defining things like linker dependencies in the code. For instance see here:

https://github.com/floooh/sokol-odin/blob/main/sokol/gfx/gfx...
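For the curious, the general shape of such a binding is something like this (a sketch with made-up names, not the actual sokol bindings):

    foreign import mylib "system:mylib"

    @(default_calling_convention="c", link_prefix="my_")
    foreign mylib {
        setup    :: proc() ---
        shutdown :: proc() ---
        frame    :: proc(width, height: i32) ---
    }

Calling `setup()` from Odin then links against the C symbol `my_setup`.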

The only minor downside (compared to Zig) is that Odin still requires a separate C/C++ toolchain to actually build the C dependencies. But I guess that's a typical 1st-world-problem ;)

(However, AFAIK Odin's FFI system isn't in any way related to or dependent on LLVM.)


When I'm evaluating an open-source project, I always check the quality of its commit messages. I don't care about following a specific format or something, all that matters is whether they are informative or not.

Sadly, Odin doesn't seem to pass the test. It's full of commits labeled with just "fix <thing>": https://github.com/odin-lang/Odin/commits/master

A few examples of projects with better commit messages:

Linux kernel: https://github.com/torvalds/linux/commits/master

gcc: https://github.com/gcc-mirror/gcc/commits/master

Perl: https://github.com/Perl/perl5/commits/blead

Wayland: https://github.com/wayland-project/wayland/commits/main


Hello, I am the creator of the Odin programming language and the main architect of the codebase too. Being full of commits labelled "Fix <thing>" is absolutely fine and absolutely clear given the context of how the codebase operates.

Many of the commits are `Fix #NNN`, which means fixing a specific GitHub issue that usually has more information attached, or there are comments in the code, etc. There are many "Fix typo(s)" commits because I (and others) make a lot of typos; these are usually trivial single-line fixes. Then there is the rest of the "Fix ..." commits, which are pretty much 1-2 line fixes of minor bugs; large bugs get multi-line commit messages, and many come with huge comments within the code to explain everything.

Your metric of the quality of commit messages is not a bad one, depending on the codebase, but it cannot be applied blindly without knowing how that codebase operates, especially when comparing a mostly centralized codebase (like Odin) to very GNU-style decentralized codebases (all of the examples you gave).


When I'm evaluating the quality of a nuclear plant, I focus on the shape and color of the bike shed.


Naturally, commit messages aren't the only criterion I use. But I've wasted way too much of my life deciphering the purpose of 20+ year old commits to ignore this.


When I want to discard the opinions of HN commenters I always go for tired, worn-out references that I don’t even understand myself.

A commit message has a function directly applicable to the craft of programming, not something auxiliary like the aesthetic properties of a shed used by bicycle commuters. Though in this case I would disagree with the GP, since “Fix <thing>” might be a good enough commit message template.


How many developers are there? For single-developer projects, commit messages have very little value compared to documentation and an extensive passing test suite.

As the project grows in contributors, the commit log gains value.


>How many developers are there?

According to GitHub, there are 132 contributors. Probably most of them were one-off drive-by PRs, but nonetheless, it's definitely not a one-person project anymore.


I think Odin is (mainly) written by a single dev. When I'm the solo dev on a project, I also don't put a lot of effort into commit messages.


Even if you're a solo dev, you will likely still have to debug your old code in the future. I'm sure most of HN is familiar with Chesterton's fence. Proper commit messages help you understand why changes were made. This matters especially in complex codebases, like compilers.


How is telling you exactly what was fixed not informative? I don't understand what you want.


What was broken? How was it broken? Why was it fixed this way? That information is vital during `git blame`.

Take a look at the commit histories of the other projects I've linked. Their commit messages are much more elaborate.


>What was broken?

<thing>


That’s a decent commit message style, actually. It would be bad if it was just “Fix”. :)


I tried it for some basic 3D OpenGL stuff. I like it, but the documentation is lacking and the standard library is all over the place. Also, AFAIK there is only one really big project using it (EmberGen), which is not enough to bother switching from C/C++.

But I'm interested to see how the project will continue. :)


Memory unsafety (by default, without opt-in) is being treated like a feature in these newer languages.


I've replied to you before on this in other comments. (here is one such example: https://news.ycombinator.com/item?id=32629951)

Odin has numerous memory safety features enabled by default. Having manual memory management does not entail having no "memory safety". You could easily have a language with GC or ARC and still have all of the unsafe features of C; Objective-C is a brilliant example of this.

A great deal of memory safety can be achieved with constructs such as:

* Bounds checking for array-like types

* Having no pointer arithmetic

* Having distinct typing (virtually no implicit type conversions)

* Having no array-to-pointer demotion like in C (a combination of the above two problems)

* Having actually decent array types: fixed-length arrays, slices, dynamic arrays, #soa arrays, etc

* Having length-bound strings rather than NUL-terminated strings

* `Maybe(^T)` type to allow for non-nil pointers to be explicitly checked (see the sketch after this list)

* Built-in discriminated `union`s (of which `Maybe` is just one of those)

* Virtual memory protections

* And many more!
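For instance, the `Maybe` point looks roughly like this in practice (a sketch; the types and names are mine):

    User :: struct { name: string }

    find_user :: proc(id: int) -> Maybe(^User) {
        return nil // not found; a real lookup would return a valid pointer
    }

    // absence must be handled explicitly before the pointer can be used:
    if user, ok := find_user(42).?; ok {
        // `user` is a non-nil ^User here
    }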

Most of the unsafe behaviour in C/C++ comes from the lack of the above features, especially its pointer semantics and the lack of a decent array type.

What Odin does not offer is ownership semantics and lifetime semantics, which is an entire discussion in itself that I won't get into in this comment.


If Odin is memory-safe by default then my bad, I was wrong.


That's because memory safety has a price: either a GC (which creates interoperability issues if you have two languages with GCs) or a complexity price, as in Rust.

So these languages stay memory unsafe but try to minimise the issues caused by the lack of safety.


That's not true. Memory safety could be free. Actually, it could even outperform any memory-unsafe language, when done right™.

The key would be hardware-supported automatic memory management.

This tech has existed for many years, but nobody uses it (likely because of our "good old friend": patents, which are a death sentence for any good idea).

https://researcher.watson.ibm.com/researcher/files/us-bacon/...

https://people.eecs.berkeley.edu/~kubitron/papers/holistic_r...

https://adept.eecs.berkeley.edu/wp-content/uploads/2018/06/A...


Making an effort in language design also has a price.


What other language takes this stance? (Other than Jai, which is not released.)


Zig


> learning C++ is too hard, so we'll just write our own new programming language instead

Many such cases.


FWIW, the Odin compiler seems to be written in C++. Are there any plans to make it self hosted?


To quote the FAQ (https://odin-lang.org/docs/faq/#is-the-odin-compiler-self-ho...):

> Odin is not currently self hosted nor will be until after version 1.0 when the main implementation of the Odin compiler adheres to a specification and is heavily tested. In general, self hosting before a stable language and compiler exists is masturbatory pleasure.


Ok. Fair enough. I will definitely give Odin a try.


The creator is explicitly against a self-hosted compiler, for better or worse.


I'm not against self-hosting.

To quote the FAQ (https://odin-lang.org/docs/faq/#is-the-odin-compiler-self-ho...):

> Odin is not currently self hosted nor will be until after version 1.0 when the main implementation of the Odin compiler adheres to a specification and is heavily tested. In general, self hosting before a stable language and compiler exists is masturbatory pleasure.


Just a heads up: Be careful posting two nearly identical comments so close together. They both ended up marked "[dead]". HN's filters catch things like that and may auto-kill them.


Thank you for that. I rarely use HackerNews and I was just trying to clarify my own position.


I had to stop reading because of all the ads! Bloody hell!


Just use uBO like everyone else; there's not a single ad visible on that page for me.


I was reading on mobile; I probably need to use an adblocker there as well.


Firefox supports uBlock Origin on mobile.


Brave has one built in.


I hadn't realized all the ads until I tried to read it from my mobile.

Here's an alternative link (substack):

https://hasen.substack.com/p/odin-praise


Firefox mobile works with adblockers.


I’d much rather use a safe language such as Rust that has the same speed without all the worries.

Probably a harder learning curve than Odin or Go, but more likely to work and also run faster (than Go at least). It also has zero-cost functional programming paradigms.



