Zig's (.{}){} Syntax (openmymind.net)



An even more unfriendly-yet-typical line is where you create an allocator, and a few lines further down you call the allocator() method on it, to get... an allocator (but you already had one! Or maybe you didn't?). Same with a Writer: you create one, but then you call a writer() method on it.

Here is the code to illustrate:

    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    var visited = std.BufSet.init(arena.allocator());

    var bw = std.io.bufferedWriter(std.io.getStdOut().writer());
    const stdout = bw.writer();
So... what are the entities we use, conceptually? "allocatorButNotReally" and "thisTimeReallyAnAllocatorIPromise"? Same for the writer?

Plus, the documentation doesn't do much to explain wtf this is and why.

The answer is probably buried somewhere in forum history, blogs, and IRC logs, because there must have been a consensus established about why it's OK to write code like that. But the lack of a clear explanation doesn't help with casual contact with the language. It's rather all-or-nothing: either you spend a lot of time daily tracking all the media about the ecosystem, or you just don't get the basics. Not good IMO. (And yes, I like a lot about the language.)


std.mem.Allocator is the allocator interface. For that struct to be considered an interface, it must not directly contain any specific concrete implementation, since it needs to be "bound" to different implementations (GeneralPurposeAllocator, ArenaAllocator, ...), which is done via pointers. An allocator implementation holds state and implements alloc, free, and resize with its own internal mechanisms, and then pointers to all these things are set into an instance of std.mem.Allocator when you call the `.allocator()` function on an instance of an allocator.

https://ziglang.org/documentation/master/std/#std.mem.Alloca...
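
To make the "binding" concrete, here's a minimal sketch of the pattern (heavily simplified; the real std.mem.Allocator vtable carries alloc, resize, and free with more parameters):

    const SimpleAllocator = struct {
        ptr: *anyopaque,
        vtable: *const VTable,

        const VTable = struct {
            alloc: *const fn (ctx: *anyopaque, len: usize) ?[*]u8,
        };
    };

    const FixedBuffer = struct {
        buf: [256]u8 = undefined,
        used: usize = 0,

        // Conceptually what `.allocator()` does: package a pointer to the
        // concrete implementation together with function pointers into the
        // type-erased interface struct.
        fn allocator(self: *FixedBuffer) SimpleAllocator {
            return .{ .ptr = self, .vtable = &.{ .alloc = allocImpl } };
        }

        fn allocImpl(ctx: *anyopaque, len: usize) ?[*]u8 {
            const self: *FixedBuffer = @ptrCast(@alignCast(ctx));
            if (self.used + len > self.buf.len) return null;
            const result = self.buf[self.used..].ptr;
            self.used += len;
            return result;
        }
    };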

File and Socket both offer a `.writer()` function to create a writer interface bound to a specific concrete "writeable stream".

BufferedWriter has both extra state (the buffer) and extra functions (flush) that must be part of a concrete implementation separate from the writer interface.

> The answer is probably buried somewhere in forums history

That's just how computers work. Languages that don't expose these details do the same exact thing; they just hide it from you.


Hey Mr Loris.

Well, your explanation doesn't really tell me why I call .deinit() on the structure I had before the allocator() call, but all the rest of the important stuff on the structure I get after that call. I think you guys, while doing a great job by the way, are kind of stuck thinking from the language creators' perspective. From the outside, certain things look so weird.

I also need to be a bit picky about the "that's just how computers work" phrase. You know, uttering such a phrase always carries the danger of bumping into someone who wrote assembly before you were even born, and hearing it makes for a good laugh..


> Well, your explanation doesn't really tell me why I call .deinit() on the structure I had before the allocator() call, but all the rest of the important stuff on the structure I get after that call.

That's because the Allocator interface doesn't define that an allocator must be deinitable (see the fn pointers held by the vtable field in the link above). So just like you have to call flush() on a BufferedWriter implementation (because the Writer interface doesn't define that writers must be flushable), you have to call deinit on the implementation and not through the interface.

Fun fact: not all allocators are deinitable. For example, std.heap.c_allocator is an interface to libc's malloc, and that allocator, while usable from Zig, doesn't have a concept of deiniting. Similarly, std.heap.page_allocator (mmap/VirtualAlloc) doesn't have any deinit because it's stateless (i.e. the kernel holds the state).
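
So in the snippet upthread, the split looks like this: lifetime management goes through the concrete arena, while allocation goes through the interface it hands out:

    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // deinit lives on the implementation...
    const alloc = arena.allocator(); // ...the interface only knows alloc/resize/free
    const buf = try alloc.alloc(u8, 128); // freed all at once by arena.deinit()
    _ = buf;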


Ok that's very helpful, thank you.


I don't know about the deinit thing, but I think this allocator/writer stuff has nothing to do with an "inside language creators' perspective". To me, even though it wouldn't be my first guess as someone who's never used Zig, it does make sense that it's done this way, since apparently Zig doesn't really have interfaces or traits of any kind for structs to implement. In fact, when Googling about Zig interfaces, I found another post from the same blog:

https://www.openmymind.net/Zig-Interfaces/

which says that an interface is essentially just a struct that contains pointers to methods. In other words, when you call the .thing() method on your SpecificThing, that method produces a Thing that knows how to operate on the SpecificThing, because functions that accept Things don't know about SpecificThings. You can't manufacture that Thing without first having a SpecificThing, and a SpecificThing can't be used directly as that Thing, because it isn't one. There's essentially no other way to do this in Zig.


> why do I call .deinit() on a structure before alllocator() call

This is explained right in the documentation for the arena allocator. An arena allocator deallocates everything at once when it goes out of scope (with defer arena.deinit()). You need to call .allocator() to get an Allocator struct because it's a pattern in Zig to swap out the allocator. And with this, other code can call alloc and free without caring about the implementation.
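
For example (a small sketch; makeGreeting is a made-up function), code written against the Allocator interface works unchanged with any implementation:

    fn makeGreeting(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
        // Behaves the same whether `allocator` came from an ArenaAllocator,
        // a GeneralPurposeAllocator, a FixedBufferAllocator, ...
        return std.fmt.allocPrint(allocator, "hello, {s}", .{name});
    }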

This is just how an arena allocator works and isn't related to Zig's design. You may take issue with Zig not having built-in interfaces and having to resort to this pattern of an implementation struct returning the interface struct, but I think the GP clearly explained the why.


I think most languages hide this by default because 99% of people don’t have to deal with it.


Most languages handle this by having interfaces/traits/etc.

Zig doesn't have those, so you're forced to use these ad-hoc struct instances.


99% of people are either better off using a language with a garbage collector or need to avoid heap allocations altogether.


My questions and remarks had nothing to do with the way you allocate memory in programs.


Yep. For some reason they want to make writing interfaces boilerplate. It's so pointless.


There are a few different ways of implementing interfaces, each with different tradeoffs, so it's not boilerplate :^)


I'm only familiar with the vtable approach, what other ways are there?


@fieldParentPtr.

The Zig allocators used to use this because it enabled allocator interfaces without type erasure, but it was found to have a minor but real performance penalty, as it is impossible for any compiler to optimize for this in scenarios that are useful for allocators.
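
Roughly, the @fieldParentPtr style embeds the interface struct as a field of the implementation and recovers the outer type from a pointer to that field. A sketch with made-up names:

    const Writer = struct {
        writeFn: *const fn (w: *Writer, bytes: []const u8) usize,

        fn write(self: *Writer, bytes: []const u8) usize {
            return self.writeFn(self, bytes);
        }
    };

    const CountingWriter = struct {
        writer: Writer = .{ .writeFn = writeImpl },
        count: usize = 0,

        fn writeImpl(w: *Writer, bytes: []const u8) usize {
            // No type erasure: recover the implementation from the field pointer.
            const self: *CountingWriter = @fieldParentPtr("writer", w);
            self.count += bytes.len;
            return bytes.len;
        }
    };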

Other interfaces might actually have the opposite performance preference.


If you control every implementation (i.e. you aren't writing a library where others will implement your interfaces), then tagged unions are a simple way to accomplish this. See the bottom of this page: https://www.openmymind.net/Zig-Interfaces/
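
A sketch of the tagged-union flavor, where dispatch becomes an exhaustive switch instead of a vtable:

    const std = @import("std");

    const Circle = struct { radius: f64 };
    const Square = struct { side: f64 };

    const Shape = union(enum) {
        circle: Circle,
        square: Square,

        // Every "implementation" is known up front, so no pointers are needed.
        fn area(self: Shape) f64 {
            return switch (self) {
                .circle => |c| std.math.pi * c.radius * c.radius,
                .square => |s| s.side * s.side,
            };
        }
    };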


It's not pointless, because you get to select exactly the design pattern that is best for the situation. Other languages may decide this for you.


IMHO the Zig stdlib (including the build system) isn't nearly as elegantly designed as the language. There's more trial-and-error and more ad-hoc solutions going on in the stdlib, and there are also obvious gaps and inconsistencies where the stdlib is still trying to find its "style".

I think that can be expected of a pre-1.0 language ecosystem though. Currently it's more important to get the language right first and then worry about cleaning up the stdlib APIs.


All languages have these problems. Even Go, with its famously excellent std, has many rough spots: things that either weren't available at first (such as context) or were just a bit poorly designed.

The most important job of std is not (contrary to popular belief) to provide a “bag of useful high-quality things” but rather to provide interfaces and types that third-party packages can use without coordinating with each other. I’d argue that http.Handler and io.Reader/Writer/Closer provide the most value, and they are just single-method signatures.

When there’s universal agreement on what shape different common “things” have, it unlocks interop, which just turbocharges the whole ecosystem. Some of that is the language, but a lot more is std, and that’s why I always rant about people over-focusing on languages.


Also, tbf, even in its current state the Zig stdlib is already infinitely more useful than the C stdlib, and even the C++ stdlib.


And will they ever revamp the stdlib API, or will they stick with it out of backward-compatibility concerns, just like C/C++ do?


Zig is a 0.x language for a reason, and that reason is to not have to offer any kind of backward compatibility until main development is complete.


They already have had several cleanup-quakes in the stdlib.


This is a naming-convention problem. In a certain other language, one that Zig is trying hard not to become, one of those things would be called an AllocatorFactory.



As people have already pointed out elsewhere, the same declaration can be made clearer by isolating the type, like so:

    var gpa: std.heap.GeneralPurposeAllocator(.{}) = .{};


Also note that on Zig master, initializing `GeneralPurposeAllocator` like this is deprecated -- it'll be removed after the next release. Instead, you should do this:

  var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;


Ah yes. Much more clear. Thank you.


I'm glad I read the last line. For those who may not have gotten that far: this is about to become a much less prevalent pattern in Zig code, replaced with declaration literals. The new syntax will look like this:

   var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
Which finds the declaration literal `std.heap.GeneralPurposeAllocator(.{}).init`, a pre-declared instance of the GPA with the correct starting configuration.
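
Declaration literals work because `.init` is resolved against the inferred result type, so the type only has to declare a value named `init`. A simplified stand-in (not the actual stdlib code):

    const Config = struct { initial_count: usize = 0 };

    pub fn GeneralPurposeAllocator(comptime config: Config) type {
        return struct {
            count: usize = config.initial_count,

            // `var gpa: GeneralPurposeAllocator(.{}) = .init;` finds this:
            pub const init: @This() = .{};
        };
    }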


After you've been writing Zig for a while, seeing `.{}` in an argument list intuitively means "default arguments".


Seems like it could just be elided entirely. Why can't it be?


I do not know Zig, but it looks like it just means "call the default constructor for the parameter/variable/type". I do not see how you could expect it to be elided unless every function auto-constructs elided arguments or always has default arguments.

In other words, for a function f(x : T), f(.{}) is f(T()), not f(), where T() is the default constructor for type T.

If we had a function with two parameters g(x : T, y : T2) it would be g(.{}, .{}) which means g(T(), T2()), not g().

It looks like the feature exists to avoid things like:

x : really_long_type = really_long_type(), which can be replaced with x : really_long_type = .{} to avoid unnecessary duplication.
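
In Zig terms that looks like this (a small sketch; Config and describe are made up here, echoing the article's example):

    const std = @import("std");

    const Config = struct {
        port: u16 = 8000,
        host: []const u8 = "127.0.0.1",
    };

    fn describe(config: Config) void {
        std.debug.print("{s}:{d}\n", .{ config.host, config.port });
    }

    pub fn main() void {
        describe(.{}); // all defaults: 127.0.0.1:8000
        describe(.{ .port = 9000 }); // override a single field
    }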


I do not know Zig either; I had assumed that it has default parameters, but it seems that it does not[0]. So, yes, it makes sense now why it cannot be elided.

They should add default parameters to avoid this sort of thing. Maybe they ought to consider named/labelled parameters, too, if they're so concerned about clarity.

0: https://github.com/ziglang/zig/issues/484


Zig believes that all code needs to be explicit, to prevent surprises. You never want code that "just executes on its own" in places you may not expect it. Therefore, if you want default arguments, you have to perform some action to indicate this.


Except it's not entirely explicit. It allows the type name of the object being constructed to be elided.

Per the article, this is the explicit form:

    var gpa = std.heap.GeneralPurposeAllocator(std.heap.GeneralPurposeAllocatorConfig{}){};


I don’t think type elision makes the code’s execution less explicit. Nothing else could go there.


That's a textbook definition of "implicit", as in not directly specified, but assumed.

The fact that an unacceptable parameter would fail compile-time validation does not make it any more readable.


Consider this:

    var foo = OpaqueTypeName(.{}){};
What is the . eliding?

You don't know. I don't know. It's impossible to tell because the type is opaque to our understanding.


i don't get this argument. what is code that "just executes on its own"? how is it more difficult to differentiate what a function does with vs. without arguments, compared to one that takes arguments with explicit values vs. arguments with default values?


explicit about branching and allocations, not so for types. we've recently got .decl() syntax, which is even more implicit than .{}


Declaring a variable doesn't initialize it in Zig, so maybe the correct semantics for elisions would be to pass an uninitialized argument.


For the same reason that, in Python, you can't pass an empty list to a function expecting a list by calling foo(); you have to use foo([]). They mean different things.


However, in Python, if you routinely call foo([]), you'd specify that (or rather an empty tuple since it's immutable) as the default value for that argument.


I believe that if most foo's users should just call it with [], the Pythonic way is to make the argument optional.


Well yes, but if it's someone else's library, realistically you're not going to change it.

Zig is a static language without variadic parameters, so you can't make it optional in that sense. You could make the options a `?T` and pass `null` instead, but it isn't idiomatic, because passing `.{}` to a parameter expecting a `T` will fill in all the default values for you.
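
To illustrate the difference (a sketch with made-up names):

    const Options = struct { timeout_ms: u32 = 5000 };

    // Possible, but not idiomatic: the caller passes null to mean "defaults".
    fn connectNullable(options: ?Options) u32 {
        const opts = options orelse Options{};
        return opts.timeout_ms;
    }

    // Idiomatic: the caller passes `.{}` and the field defaults fill in.
    fn connect(options: Options) u32 {
        return options.timeout_ms;
    }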


This doesn’t answer the question why Zig doesn’t have default argument values.


Default argument values create variadic functions.

Arity N when you supply a value

Arity N-1 when you use the default


How does this create variadic functions? The arity is the same, since the function signature defines the exact amount of arguments. The compiler just passes the omitted ones for you.


Okay, but why could a static language not have variadic functions?


That's their design choice.

I can think of a few reasons

- makes function calling simpler

- faster compilation

- less symbol mangling

- makes grepping for function implementation easier

If for some reason you think you absolutely can't live without variadic functions, maybe don't use Zig.


i have never used zig before, but after reading the article i came to the same conclusion. the "problem" (if it is a problem at all) really is that .{} is the syntax for a struct whose type is to be figured out by the compiler, which new users will be unfamiliar with.

i don't know if there are other uses for . and {} that would make this hard to read. if there are, then maybe that's an issue, but otherwise, i don't see that as a problem. it's something to learn.

ideally, each syntax element has only one obvious use. that's not always possible, but as long as the right meaning can easily be inferred from the context, then that's good enough for most cases.


One will quickly become accustomed to the .{} pattern when learning Zig. It's used for struct constructors:

  const a_struct: StructType = .{ .foo = "baz"}; 
As well as union initialization. So .{} is a constructor for a struct where every field has a default value, and it accepts all of those defaults.


I think it becomes clearer when considering that these are all equivalent:

    const a_struct: StructType = StructType{ .foo = "baz" };

    const a_struct = StructType{ .foo = "baz" };

    const a_struct: StructType = .{ .foo = "baz" };


> that new users will be unfamiliar with

It's really not much different than in nearly any other language with type inference except for the dot (which is just a placeholder for an inferred type name).


fugly syntax is one of the biggest reasons i will never touch rust. zig is not too far off, unfortunately. if i needed a non-gc language, i would go for odin. not perfect, but the closest to usable. it's just too hard to do anything but Go once you get comfortable with it. they got too many things right to see the grass being greener elsewhere.


Rust encodes far more information into source code than most languages, so it simply needs more syntax. I wouldn't say it's ugly (except macros, not sure what they were thinking there), there's just more of it.

Obviously if you remove lifetimes, types, references, etc. you're going to need less syntax.


> Rust encodes far more information into source code than most languages, so it simply needs more syntax.

I don't think this is the case. Firstly, all the necessary data can be encoded with keywords, spaces, and newlines. Forth or Tcl can encode everything Rust can (since their interpreters are 100% configurable) using only keywords with spaces between them. A language should have special syntax only for the important parts, not for everything.

Secondly, even though Rust has special syntax for a lot of stuff, it could be nicer to the eye.

For example, RattleSnake here https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html or "Haskell flavored Rust" here https://news.ycombinator.com/item?id=34541695#34543124 are much nicer to the eye.

But if you need a bunch of stuff that may or may not apply to, for example, a function or type definition, then why not just use CSS/Rebol-style syntax and put all the keywords that apply in a row? No need for all the weird symbols, brackets, colons, and all that. You could even use keyword=no and be extra explicit.


Did you actually read that Rattlesnake post? It's making exactly the same point I was.

Also IMO the Rattlesnake example looks awful. The Haskell flavoured Rust is even worse. Do you seriously prefer those? If so I'm afraid your sense of taste is a bit suss.


just because there are more keywords/syntax, it does not necessarily mean it has to be ugly. they could have made better decisions when designing the language.


That “.” substitution of an inferred type is going to backfire. I really appreciate it when code has one simple property: you search for a type by name and you get all of the places where it’s constructed. That makes it easy to refactor the code and reason about it with local context. It’s the case with Rust, but not C++ or Zig.


Any IDE worth its salt will let you search a type by name and get all the places where it's referenced, regardless of type inference.


A language that promotes itself as simple, with no hidden control flow, etc., shouldn't need an IDE to find hidden things, imho.

But that kind of shortcut seems to be optional.


"No hidden control flow" is completely orthogonal to "no implicit typing". I think anyone looking at Zig would immediately recognize that it is firmly in the type inference camp by choice.

As far as simplicity, I think their pitch is "simpler than Rust", not in absolute terms. The whole comptime thing is hardly simple in general.


I think it is simple, but not easy to grasp. I might be quibbling over words, but these things are not quite the same in my eyes.

  simple <-> complex
    easy <-> difficult


I know this gets shared all the time, but in case anybody in this thread hasn’t seen the rich hickey talk: https://youtu.be/SxdOUGdseq4?si=3sa6JRg6Ei1Cf_Wl


I am not a big Zig aficionado, but I definitely contrast it in my mind with C and C++ rather than Rust. It aims at being a “better C” sort of language more so than a “better C++”, which is what Rust seems to be focusing on.


Their pitch is "A Simple Language" as seen on the website.


This doesn't cover every use case (e.g., reviewing a PR and just trying to, you know, read the PR).


Better than that would be a language that doesn't require or almost compel users (by "almost compel" I mean the user community, obviously, not the language literally, since it is not sentient) to use an IDE in order to use the language, and with which you can still do what you said above using just a text editor.

In the same vein as what you said here about orthogonality (https://news.ycombinator.com/item?id=42097347), programming languages and IDEs should be orthogonal (and actually are, unless deliberately linked). People were using languages long before IDEs existed. And they got a hell of a lot done using the primitive tools that existed back then, including, you know, gems like Lisp and the concepts embodied in it, many of which have, much later, been adopted by many modern languages.

And I still meant "almost compel", even by the community, because of course they cannot really compel you. I meant it in the sense of, for example, so many people using VS Code for programming posts.


> Better than that would be a language that doesn't require or almost compel users (by "almost compel" I mean the user community, obviously, not the language literally, since it is not sentient) to use an IDE in order to use the language, and with which you can still do what you said above using just a text editor.

It's ironic that you complain about this because Zig is probably the most "normal editor" friendly programming language for exactly the kind of thing mentioned in the article.

I don't need an IDE to figure out the 12 options to that function and fill them out with the correct defaults. I don't have to hunt through 23 layers of mysterious header files to find the declaration I need to figure everything out. etc.

Just try figuring out a foo(12).bar(14).baz("HELP!").fixme("ARRGH!") construction chain in C++ or Rust without an IDE. Oof.

1) Zig doesn't encourage those and 2) in Zig I can trace the @import() calls and actually run "grep" on things.


>It's ironic that you complain about this because Zig is probably the most "normal editor" friendly programming language for exactly the kind of thing mentioned in the article.

echo Who complained, $(echo bsder | sed 's/sd/ro/') ? ;)

Not me. Don't put words into my mouth.

(I don't care if I got the above shell syntax wrong), this was just a quickie, for fun ;)

you seem to have misunderstood my words, in the exact opposite way from what I meant. congrats. not!

>I don't need an IDE

who told you that I needed an IDE?

chill, willya?

and, wow:

>to figure out the 12 options to that function and fill them out with the correct defaults. I don't have to hunt through 23 layers of mysterious header files to find the declaration I need to figure everything out. etc.

12 and 23, exaggerating much? we are not talking about win32 API functions, podner.

>Just try figuring out a foo(12).bar(14).baz("HELP!").fixme("ARRGH!") construction chain in C++ or Rust without an IDE. Oof.

Don't resort to theatrics or histrionics to make your point (like HELP! and ARRGH!), (I am allowed to, tho, because i > u :)

>1) Zig doesn't encourage those and 2) in Zig I can trace the @import() calls and actually run "grep" on things.

faaakkk!

though a bsder, you find header files mysterious, and cannot grep through them if they are in C++ or Rust, eh? are find and xargs your enemies? or even just ls -R | grep?

stopped editing, even though there might be a few minor typos.

now, fire back! :)


Does Zig have an IDE worth its salt?


Zig does not have an IDE, but it does have a language server called zls[0] that I have found to be pretty decent after implementing Loris' suggestion from this post[1].

[0] - https://github.com/zigtools/zls

[1] - https://kristoff.it/blog/improving-your-zls-experience/


I'm using IntelliJ with a Zig plugin and finding it quite nice. But I'm a Zig noob.


An easy way to find all the places is to temporarily add a new struct member without a default, run the compiler, and let it complain about all the places where the struct is instantiated.

Similar to when you add a new enum member and it complains about all the switch statements that are not handling it (as long as you didn't add a default case).
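
E.g. (with a hypothetical field name), this makes every `.{ ... }` construction site fail to compile until you've visited it:

    const Config = struct {
        port: u16 = 8000,
        _audit: void, // temporary field with no default: every init site now errors
    };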


This is tedious in Rust when initializing a struct that has nested structs. A language which has type inference at all should at least be consistent about it and allow you to omit the type when it can be inferred by the compiler.


What's meaningfully different about Rust's type inference? E.g.:

  fn example() {
      let p = create_point(args);
  }
Where create_point() is a function from a module (e.g. not even defined in that file) which returns the Point type automatically inferred for p? I mean, sure, it's technically constructed in the called function... but is that often a useful distinction in the context of trying to find all the places where new instances of types are assigned? In any case, this is something the IDE should be more than capable of making easier for you than manually finding them anyway.


GP is talking about how easy it is to find places where the type is instantiated. Seems to me that create_point() will have one such site. And then it’s trivial to find callsites of create_point() with the LSP/IDE. What’s the issue?


The IDE can find all the places new variables are assigned the type (regardless of whether it's direct instantiation, a return value, inferred, or whatever way it comes about), so what's the special value of being able to manually find only the local instantiations via Ctrl+F if you'd still need to manually track down the rest of the paths anyway?


I'm actually more puzzled about the infinite recursion in this type function:

  fn Node(T: type) type {
      return struct {
          value: T,
          next: ?*Node(T) = null,
      };
  }
In other languages, defining types in terms of themselves is unproblematic, because the type identifier is just a symbol and the whole thing amounts to a graph with a backreference.

However, here it's supposed to represent actual executable code, which is run by the compiler and "produces" a type in the end. But how would the compiler execute this function without getting stuck in a loop?


That seems wrong? For exactly the reason you say. The correct code, I would guess, should be `@This()`

However, I also wouldn't be surprised if somehow the memoization in the Zig compiler makes this OK.


No, it's fine because the 'next' struct member is just a pointer which has a known representation.


Ah, so the * type function can "lazily evaluate" its argument?


It's not actually about `*` -- for instance, declaring `const T = *T;` emits an error. The thing that makes this okay is that field types (for structs and unions) are evaluated in the "lazy" way you describe.


Ah, that makes sense. Thank you!


Using parens to pass type arguments was one of the things that turned me off on Zig. For a language that prioritizes "no hidden control flow," it sure did a lot to make various syntax conventions _masquerade_ as control flow instead.


> Using parens to pass type arguments was one of the things that turned me off on Zig.

It's just regular comptime function calls which happen to have parameters or return values that are comptime type values (types are comptime values in Zig). I find that a lot more elegant than inventing a separate syntax for generics, and it lets you do things trivially which are complex in other languages (like incrementally building complex types with regular Zig code).
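
For instance, a generic type is just a function you call with parens, and ordinary comptime code can pick or build types (a minimal sketch):

    const std = @import("std");

    fn Pair(comptime A: type, comptime B: type) type {
        return struct { first: A, second: B };
    }

    // Regular control flow, evaluated at compile time, computing a type:
    fn IndexType(comptime capacity: usize) type {
        return if (capacity <= std.math.maxInt(u8)) u8 else u32;
    }

    const P = Pair(i32, []const u8);
    const I = IndexType(300); // u32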

It might be unusual when coming from C++, Rust or Typescript, but it feels 'natural' pretty much immediately after writing a few lines of generic code.


What do you mean? It is control flow. Generic functions are just higher-order functions that get evaluated at compile time.


It is an interesting question of definitions. Is control flow only at runtime? Is `#if` control flow in C?

If I had to define it, I would go with runtime-only, but I could see the other way too.


Macros can have control flow, so compile-time control flow is definitely possible, but perhaps we trained ourselves to not think of control flow in this way because using complicated compile-time logic is generally frowned upon as a footgun.

Perhaps Zig is the language that on purpose blurs the line between what runs when (by basically having what looks like macros integrated into runtime code without any conspicuous hashtaggy syntax), and so a Ziggy would not see compile-time control flow as something weird.


Zig’s comptime is just code that runs at compile time. Unless we have another term, we must call it control flow.


[flagged]


I don't get it. Their reply looks normal to me.

Is it because they disagree with you?

I take these instances as learning opportunities and that makes me want to comment more, not less.


Like what?


Why did they keep the dot in struct initialization? Why not a syntax without the dot, like:

    const c1 = Config{ port = 8000, host = "127.0.0.1" };

Is there some other use for the dotless one?


Just `{}` means a code block; in Zig you could do something like

  const c = blk: { const x = 5; break :blk x-3; }; // c = 2
just having an empty block `{}` is exactly that: an empty block of type `void`. Having a dot or something else to distinguish it from a block is necessary in order for it not to be that.


In many situations the compiler can infer the type:

    fn my_func() MyType {
        return .{ ... };
    }
The dot is just the placeholder for the inferred type, and the above is equivalent to:

    fn my_func() MyType {
        return MyType{ ... };
    }
...and Zig lets you write that verbose form too, if you prefer.


Because your type name might be std.foo.bar.baz.quux.Config


A whole lot of cleverness for a language that refuses to compile when you have unused parameters.


I am not a fan of zig, but I am a fan of discipline so I like this particular design decision.


I would be fine with it if it only threw an error about that when building in release mode or if there was a flag to silence it temporarily.

But while trying out some things and learning the language, I find it annoying. And I don't know how it makes me more disciplined when I can just write `_ = unused;` to suppress the error. Even worse, if I forget that assignment in the code, the compiler will never warn me about it again, even when I want it to.

So far I haven't seen any upside to this.


Just use an editor with language server support and you don't need to worry about adding or removing the `_ = unused; // autofix`.

I wrote a 16kloc Zig project a couple of months ago, and not once was the 'unused variables are errors' thing annoying (https://floooh.github.io/2024/08/24/zig-and-emulators.html).

IME unused variables being (linter) errors isn't all that unusual in the Javascript/Typescript world either. It feels 'normal' fairly quickly.


Or the compiler could just add a flag and not make assumptions about my setup or workflow. Also, linters are optional and under my control, while the Zig compiler is forcing this onto me, and for what benefit exactly?

> I wrote a 16kloc Zig project a couple of months ago, and not once the 'unused variables are errors' thing was annoying

That's great, but different people are different. I've tried learning Zig twice by now but this is just a dealbreaker, simple as that.


"just"


The way I deal with this in my language, which also bans unused variables, is simple: I delete the unused variable or I use it.

My workflow is probably very different from yours I'm guessing. I have my editor configured to save on every keystroke and I have a watch process that then recompiles my code. I pretty much never manually compile. My compiler is sufficiently fast that I almost never wait more than 1 second to build and run my code. I notice every error immediately and fix them as they arise. This is what I am talking about with discipline. I never allow my code to get into an unexpectedly broken state and don't need a linter since I just make the compiler as strict as I would make the linter. This ultimately simplifies both the compiler and the build pipeline.

These are all huge upsides for me. The cost of occasionally deleting a definition and then restoring it is, for me, minor compared to the cost of, say, writing a separate linter or adding feature flags to the compiler (the latter of which doesn't fit into my workflow anyway, since I auto-compile).


The problem is that in order to delete an unused variable, you may need to delete a huge chunk of code which would be useful later when you want to actually use the variable.


can you give an example please? i can't imagine how any section of code would be affected by removing an unused variable. if the code references the variable, it would be used. if it doesn't, then why would you have to delete it?


In pseudocode:

  a = input()
  b = f(a)
  c = g(b)
  d = h(c)
If you delete the unused variable d, then c will be unused, so you’ll have to delete it too. Iterating this, you will end up deleting the entire code.


You could:

- comment it out

- use git

- use ctrl-z


Or alternatively the compiler could just not force me to do any of those things.


Having to do that after every small change really breaks the flow.


Yeah, OK, overstated. In reality this will be at most 1/20th of your changes.

Maybe 1/6 if you're debugging.


I feel like this is like saying “python shouldn’t care so much about indentation, I’m just trying to learn!”


Wrong syntax is (and must be) an error. Totally different. The problem in Go and Zig is that they put theory over practice: no compiler warnings is a good idea in theory, but it fails in practice for things like unused variables or unused imports. Defending that makes it even worse and raises the question of what other treasures they have buried in their language design. This thread is a testament to that.


Forbidding unused imports was a direct response to the practical difficulty of compiling google-scale C++ binaries: https://go.dev/talks/2012/splash.article#TOC_5.

In theory, programmers can just be disciplined enough or set up CI lints for unused imports. In practice…


I'm sure the rest of us all benefit from arcane doctrine required to scale up a 25,000 engineer, 40,000 commit/day monorepo.


You do. The Zig compiler and stdlib can iterate faster across a team of mostly volunteer developers, with varying degrees of skill and spread across global time zones, because of the discipline that the compiler imposes.


This is a nonsense argument, because there are more pragmatic solutions: turn warnings into errors for release builds, or, if there is only one build type, have a policy that requires developers to remove all warnings before committing code.


i prefer a language that doesn't even need to declare imports. why can't the compiler figure them out on its own?


I think you're being sarcastic, but ambiguity is the obvious answer. Your IDE can help you resolve these though.


i am absolutely serious. pike for example does not need imports. if a reference is not found in the local namespace the compiler will find it in the module path and resolve it by itself. there is no ambiguity because each module has a unique name.

we accept type inference but don't do the same for module references? why?

pike does have an import statement, but its effect is to include all members of a module into the namespace instead of just resolving the ones that are really used. and instead of speeding things up, using import on modules with lots of members may actually slow things down because the namespace is loaded up with lots of unused references. sometimes import is used to help readability, but that's rarely needed because you can solve the same with a simple assignment to a variable.

if you can show me an example where import resolves an ambiguity, i'll try to show how pike solves the problem without import.


I don't know how it works in Zig. In JavaScript, you can have lots of things with the same name, so you need to explicitly import them and you can give them an alias at the same time if there's a clash. I believe Python is the same.

In C++, you have to #include the thing you want to use somewhere, not necessarily in the same file you use it, it just has to end up in the same compilation unit. If two files define the same name, you'll end up with a compilation error. In very large projects, sometimes you won't notice this until some transitive dependency includes the other thing.

I'm personally a fan of explicit imports. I imagine this helps IDEs resolve references without having to scan 1000s of files to resolve them, and it helps build tools pull in only the needed files. Regarding speed (of execution), in JS we have tree-shaking so if you import a file but don't use all of its members, those excess/used members will be dropped from the final bundle (saving on both bundle size and run-time parsing). Plus it means I don't have to spend much time thinking of a super/globally unique name for every thing I define.


in python every module has a unique name:

    import math
    foo = math.pi
the compiler can obviously find math in the import statement. why can't it find math.pi directly?

pike can.


If you use fully qualified statements everywhere, sure. That means writing `datetime.datetime.now()` everywhere instead of `from datetime import datetime` and then just doing `datetime.now()`. But then you'll tell me, just create an alias, `dt = datetime.datetime`. Sure, I guess, but now you've made `datetime` some kind of superglobal so you can't use that as a variable anywhere.

And how does this work in practice? In Python and JS you can also put executable code inside your modules that gets executed the first time it's imported. Are you telling me that that's going to run the first time it's implicitly imported instead? Is that a good idea?

The story in JS gets even crazier because you can do all kinds of whacky things with imports, like hooking up "loaders" so you can import things that aren't even JavaScript (images, CSS, you name it), or you can modify the resolution algorithm.


> but now you've made `datetime` some kind of superglobal so you can't use that as a variable anywhere

depends on the language, in pike, and as far as i know in python i still can use it as a variable if i want to, it would just cover the module and make the real datetime inaccessible. but why would i want to do that? if i see someone using a well known module name as a variable in python i would probably recommend changing it.

i don't see the benefit of not filling the global namespace over making import unneeded. add to that, by convention in pike module names start with an uppercase letter, and variables don't, so the overlap is going to be extremely small and never causes any issues.

> In Python and JS you can also put executable code inside your modules that gets executed the first time it's imported

pike doesn't have that feature. if there is something to be initialized you'd have to call it explicitly. i don't see a problem with that, because in python you call import explicitly too. so you just swap out one need to be explicit for another. i prefer the pike way because it actually gives me more control over if and when that initialization happens.

i think that also better fits the paradigm of explicit is better than implicit. in python i have to learn which module does initialize stuff or ignore it, in pike i can easily see it from the way it is used.

further in pike to get an initialization i would normally create an instance of a class in the module because modules are singletons, and you probably don't want to change something in them globally.

> going to run the first time it's implicitly imported instead

pike is compiled, that is, all these references are resolved first, before any code is run. so even if there were any implicit initialization it would be possible to run it first.

more specifically, pike modules are instantiated objects. i don't know the internals, but i believe they get first instantiated when they are resolved. again, that's before the rest of the code where the reference came from is running


I'm not sure I see the total difference between matching parens and matching defs and refs.

Sure, saying "an open paren must have a matching close" is quantitatively different from "a def must have at least one matching ref", but is it really qualitatively different?


A language that allows you to arbitrarily omit parentheses would be impossible to parse. That’s not the case for unused variables.


Not impossible: some lisps allowed arbitrary omission of close parens (upon finding a closing square bracket).

https://news.ycombinator.com/item?id=29093339

http://arclanguage.org/item?id=20979


HTML allows omitting closing tags for <p>, <li>, and others.


You are comparing invalid syntax to a purely cosmetic temporary non-issue.


That's a very different thing and you know it. Indentation is syntax. You can't just omit braces and expect it to parse.


Zig treats unused variables as a syntax error.


Let's say I gather diagnostic data, which I conditionally print during testing. Are you saying that I cannot leave the diagnostic code in place after I comment out the print function? That's unproductive and a major obstacle to using Zig. I'm still pissed at Andrew's stance of preventing tabs, operator overloading, and polymorphism, and this just seals my "stay away" stance. I really do want to like Zig, but cannot.


You don't need to comment out the print function - it could gate its behavior on a comptime-known configuration variable. This would allow you to keep your debug variables in place.
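
Something like this sketch (the flag and names are made up): because the gate is comptime-known, release builds compile the branch out, while the diagnostic variable still counts as referenced:

    const std = @import("std");
    const builtin = @import("builtin");

    const diag_enabled = builtin.mode == .Debug;

    fn process(value: u32) u32 {
        const diag = value * 2; // diagnostic data we want to keep around
        if (diag_enabled) {
            // Compiled out in release modes, but `diag` still counts as used.
            std.debug.print("diag: {d}\n", .{diag});
        }
        return value;
    }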


It doesn't need to, though. It goes out of its way to determine that the variable is unused.


if you are a fan, you cannot be disciplined, because that is a contradiction in terms.

fans are indisciplined. ;)


If you're a fan of discipline then you could also just call lint (or equivalent) before compiling.


doesn't help me when i have to deal with other people's code. a language that enforces discipline by itself tends to be easier to read.


But what is the next step? A compiler that complains when you multiply by constant one?


anything a linter can do can be included in a compiler. or the linter can be part of the compilation process by default. in other words, instead of being optional it should be required, maybe with a special opt-out, but opt-out should be frowned upon.


With the Zig language server it’s not terribly annoying:

_ = foo; // autofix


That's one of the great advantages of Zig. Other languages can't always enforce this rule (because of inheritance and such) and will generate strong warnings instead.

If you like copying dead memory around, you can always do `_ = unusedParam` to confirm to the compiler that you don't need that variable after all, despite going out of your way to declare it.


If you want to see true cleverness just go see the last devlog on the zig website.


I think this is the post parent is referencing:

https://ziglang.org/devlog/2024/#2024-11-04

It seems like an interesting idea, but I wish Andrew had spent more time fleshing it out with complete examples. I can't tell if the _ characters are eliding values or if that's literally what's in his code.


It's Zig's equivalent of the newtype idiom: https://doc.rust-lang.org/rust-by-example/generics/new_types... for integers.

The underscores mean that it's a non-exhaustive enum. An exhaustive enum is where you list all the names of the possible enum values, and other values are illegal. A non-exhaustive enum means any value in the underlying storage is allowed. So at root this code is creating a bunch of new integer types which are all backed by u32 but which can't be directly assigned or compared to each other. That means you can't accidentally pass a SectionIndex into a function expecting an ObjectFunctionImportIndex, which would be impossible to do if those functions just took raw u32's.
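
A sketch of the pattern (made-up names):

    const SectionIndex = enum(u32) { _ };
    const SymbolIndex = enum(u32) { _ };

    fn sectionOffset(i: SectionIndex) u32 {
        return @intFromEnum(i) * 16;
    }

    test "distinct u32-backed index types" {
        const s: SectionIndex = @enumFromInt(42);
        _ = sectionOffset(s);
        const sym: SymbolIndex = @enumFromInt(42);
        _ = sym;
        // _ = sectionOffset(sym); // compile error: expected SectionIndex
    }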


Ah, thanks! I tried converting that example to Zig:

https://tars.run/t3eInpPFAgc

Is that the idea?

You can do the same thing wrapping integers with structs, but enum makes it slightly more concise?


Yes, that's how it works.

I think in Zig for new types you'd use enums for ints and packed structs for more complex types.


It's an interesting pattern, but it's a shame there's no way to efficiently use one of those OptionalXIndex types with Zig's actual null syntax (`?`, `orelse`, etc.). It would be smoother if you could constrain the non-exhaustive range and let the compiler use an unused value as the niche for null. Maybe something like `enum(u32) { _ = 1...std.math.maxInt(u32) }`.


There's an issue tracking that: https://github.com/ziglang/zig/issues/3806


That's just newtype.


And has no multi-line comments.


The article spends a lot of time justifying a syntax that really just papers over Zig's lack of parameter pack support. The same pattern in Rust would just use variadic templates/generics.


> The same pattern in Rust would just use variadic templates/generics.

Are you sure Rust has variadic generics? Last I checked, the extent of progress was a draft RFC. https://github.com/rust-lang/rfcs/issues/376


To be super clear: it does not.


> The same pattern in Rust would just use variadic templates/generics.

Man, must be nice to be a time traveler from the 2030s. How does Covid25 turn out?


This comment might be valuable if you added some elaboration, and an example of what it looks like.


It's amazing they couldn't figure out how to get f(.{}) down to just f({}). Like here is this brace enclosed thing being matched against the argument type.


Those mean different things.

f(.{}) is calling a function, which takes a struct, using an in-place constructor which provides the default arguments for the necessary struct type.

f({}) is passing an instance of `void` to a function with a `void` type parameter. Do you need to do this, specifically? No. But you can.

  fn voidPass(v: void) void {
      _ = v;
  }
  
  test "void pass" {
      voidPass({});
  }


You do need to pass a void value if you use a hashmap as a set (e.g. StringHashMap(void)).

Adding an entry requires you to pass a void value:

    map.put("hi", {});


That's how it is in C++, so I'm used to it now, but I don't think I mind the dot. It differentiates a block scope from a struct initializer.


In C's grammar you would need to wrap the {} as ({}) in order to get into a block expression scope from an argument list.


That's GNU C, not standard C.

Standard C has no such feature as blocks being expressions.

Compound literals are grotesque. The braces have to be preceded by cast syntax indicating the type. It could be subject to inference; maybe the current draft has something.


I actually find compound literals quite pleasing, visually.

Parentheses, brackets, and braces, I find, can help a lot in guiding your eyes through code. They make it more „regular“, or lessen the „entropy“ (physicists, please don't stone me).

For this reason I've come to dislike Rust's parenthesis-less if syntax, after being a die-hard fan. With parentheses it just reads better.


C99 has compound literals which allows you to call functions that take struct values or pointers to structs like this:

    my_func((my_type){ .x = 1, .y = 2 });

    my_func(&(my_type){ .x = 1, .y = 2 });


Yeah, they designed a whole language, but they surely couldn't remove that damned dot.

Every single time, I am telling you, every single time I must write an anonymous struct using that dot, I am becoming totally confused!

Me too buddy, me too!

Edit: I should put an /s here, am I right?



