I wish more programming languages provided interfaces in the standard library for libraries to build around, like Go does. It makes using libraries a lot better because you aren't dependent on any one library, as long as it uses the std library interface.
This is huge for things like database drivers, which might become outdated or not support certain features. In Go, switching database drivers is as simple as importing the new library. You don't have to change your code.
In Rust, for example, you have to go out and pick a database driver, and no two libraries will work the same. If you pick one Postgres library and it becomes outdated, you have to rewrite your code to support the next one you move to.
This is why I would never use Rust or Zig for things like HTTP servers.
> I wish more programming languages provided interfaces in the standard library for libraries to build around, like Go does. It makes using libraries a lot better because you aren't dependent on any one library, as long as it uses the std library interface.
> This is huge for things like database drivers, which might become outdated or not support certain features. In Go, switching database drivers is as simple as importing the new library. You don't have to change your code.
The effect is that every database driver becomes outdated and doesn't support certain features, instead of just a few. Python people have a saying that the standard library is where modules go to die. Java's database drivers massively lag behind (e.g. there's still no good async support for most of them), because JDBC is a lowest common denominator that every driver has to be dumbed down to, but is established enough that it sucks all the oxygen away from any efforts to write better drivers.
Far better to keep the standard library small and allow modules to update on their own terms and their own schedule. If the library you're using does what you need, you can stick with it, but a better alternative (which always means a new interface in practice - the idea that you can make substantial improvement without changing the interface is mostly a myth, because the interface is the most important part of a library) can appear and compete on its merits, and if the improvements are worth migrating to then it will win out.
> Very Hard Disagree. JDBC is pretty amazing actually. That's why you find so many database tools and wizards written in Java as opposed to Rust.
Java is much older and bigger than Rust. I don't think its level of database tooling is noticeably better than other big languages with less of a standard database API (e.g. Python).
> Regarding async - with Java 19 virtual threads, newer versions of JDBC drivers support async with no to minimal changes.
So it's only taken what, 15 years, and even then it's only been done because they found a way to make it happen without having to actually change the API (and it remains to be seen how well that actually works in practice - I'm pretty skeptical because other platforms have tried and failed with that approach).
All those complaints forget that the standard library is available everywhere the language has a full implementation, while third party libraries are hit and miss.
I'd rather have outdated code that works than not have anything at all.
> All those complaints forget that the standard library is available everywhere the language has a full implementation, while third party libraries are hit and miss.
Only if your dependency management is bad.
And if you're stuck running on e.g. an old install of the language, there's at least a chance that the third party library will support it, whereas if you want a new standard library feature you're SOL.
Maven has worked well for over a decade. I understand cargo is similar.
If your dependency management is any good then there's nothing to lose and a lot to gain from splitting your libraries out and allowing them to have independent release cycles. You can always do "platform" releases that aggregate together a bunch of finer-grained libraries with versions that are known to work together, if you have a use case for that; going the other direction is a lot harder.
> Did Maven help to split out Java's standard library?
I would say so, although splitting an existing standard library is still an extremely painful and cumbersome thing to do (I don't think any other major language has seriously attempted it) - exactly why it's better to avoid having a large standard library to start with.
> Is Java's standard library small?
I'd say no, but the parts of it that are used nowadays are. For example there is an HTTP client implementation in the standard library, but it's very rarely used by today's Java programs since its design is quite dated.
One approach in Go is to make an interface that can be "leaked" if you really want: you'll be using the standard interface 99% of the time, but if you need something not provided by the abstraction, you can ask for the underlying implementation with `db.Driver()` and cast it to the appropriate type.
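A minimal sketch of that escape hatch, assuming the lib/pq driver (any driver works the same way):

```go
package main

import (
	"database/sql"
	"log"

	"github.com/lib/pq" // imported non-blank here so we can name its Driver type
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// 99% of the code goes through database/sql; when the abstraction falls
	// short, the underlying implementation is one type assertion away.
	if drv, ok := db.Driver().(*pq.Driver); ok {
		_ = drv // driver-specific functionality would go here
	}
}
```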
Right, but it's very hard to overcome the inertia of an established standard.
(JDBC isn't actually in the standard library IIRC but it's a JSR, so it's an "official" part of the platform and that creates a lot of pressure to follow that)
While in general I agree, database libs are probably the worst example of this - in practice you can't just swap sqlite for postgres or mysql. I usually avoid the generic driver wherever possible and use one specific to my database.
I would highly recommend against this. For example, Postgres and SQLite differ in what they accept as meaning 'true' or 'false'. Try this with both databases: 'create table foo (whatever bool not null)', then insert 't', 'true', 1, true, 'f', 'false', 0, false. Postgres will reject 0 and 1. Now query for true and false values, like 'select count(1) from foo where whatever=true'. SQLite returns 2! Postgres returns 3. Do the same for false; same results. Now ask SQLite for the count of 'select count(1) from foo where whatever != true and whatever != false'. It returns 4.
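If you want to reproduce it, here's a sketch of the experiment through Go's database/sql, assuming the mattn/go-sqlite3 driver (point the same statements at Postgres to compare):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE foo (whatever BOOL NOT NULL)`); err != nil {
		log.Fatal(err)
	}
	// SQLite happily stores all eight values; Postgres rejects 0 and 1.
	if _, err := db.Exec(
		`INSERT INTO foo VALUES ('t'),('true'),(1),(true),('f'),('false'),(0),(false)`,
	); err != nil {
		log.Fatal(err)
	}

	var n int
	if err := db.QueryRow(`SELECT count(1) FROM foo WHERE whatever = true`).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println(n) // SQLite: 2. Postgres (with its six accepted rows): 3.
}
```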
You are in for a world of hurt if you think your SQL statement parsing means that your application is going to work. Changing database backends, in my mind, amounts to a full rewrite of the data layer. Whatever quirks you have built around are likely to subtly change.
Oh, and one unsolicited tip. If you're using Go, don't bother with database/sql. Just use pgx + pgxpool from the get go. You aren't going to be able to plug some other database implementation in and get a working app. So the abstraction layer is basically unnecessary.
For what it's worth, I think it would be great if standards groups wrote up robust test suites to go along with their standards.
For example, CommonMark has a test suite with hundreds of machine-readable input/output examples. Each example has some input markdown text and an expected parsing result. Any compliant CommonMark implementation can use that as a test / conformance suite during development. As a user, the result is that you can be pretty sure every spec-conforming implementation will interpret the same markdown text the same way.
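A sketch of what consuming such a suite looks like - the markdown/html/example field names match CommonMark's spec.json, and `renderHTML` is a hypothetical stand-in for the parser under test:

```go
package markdown_test

import (
	"encoding/json"
	"os"
	"testing"
)

// renderHTML is a stand-in for whatever parser is being tested.
func renderHTML(markdown string) string { return "" }

type specExample struct {
	Markdown string `json:"markdown"`
	HTML     string `json:"html"`
	Example  int    `json:"example"`
}

func TestCommonMarkSpec(t *testing.T) {
	raw, err := os.ReadFile("spec.json")
	if err != nil {
		t.Fatal(err)
	}
	var examples []specExample
	if err := json.Unmarshal(raw, &examples); err != nil {
		t.Fatal(err)
	}
	for _, ex := range examples {
		if got := renderHTML(ex.Markdown); got != ex.HTML {
			t.Errorf("example %d: got %q, want %q", ex.Example, got, ex.HTML)
		}
	}
}
```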
During Chrome's development they built something similar. They gathered the HTML from hundreds of real websites and made browser-independent data sets showing how that HTML must be parsed. That freed up the team to try out wild optimizations, with certainty that they wouldn't break compatibility. When Microsoft copied JavaScript into IE, I assume they did something similar to make sure they parsed JavaScript in an identical way to how Netscape Navigator parsed JS.
It would be harder for a complex protocol like bluetooth, but it would be fantastic to have a "bluetooth tester" device which tried out a lot of the strange things real bluetooth devices do to make sure any implementation is compatible with any other implementation. You can tell a device like this doesn't exist (or it's not very good) because of how abysmal cross-device bluetooth compatibility is in general. (Linux doesn't work properly with either my bluetooth keyboard or mouse because, while both devices apparently work on windows and macos, they don't properly adhere to the spec.)
If SQL were a better written spec, it wouldn't have random incompatibilities like that.
You've actually described what true TDD is: you test requirements, which leaves you free to change your implementation without fear that you have broken anything. Also (as you have pointed out) it makes writing a faster implementation much easier.
Another downside is that you lose access to all your database-specific functionality, or at best it becomes more complicated to use, as you need to unwrap the abstraction layer.
Yeah well, if you read the docs, you know that SQLite doesn't support the boolean type.
"SQLite does not have a separate Boolean storage class. Instead, Boolean values are stored as integers 0 (false) and 1 (true)."
If you choose the right SQL and the right data types, you can definitely have database independence. Our integration product (>10 years old) supports 4 different databases for customers with a single, common code base. As long as you have the test suites, it's definitely doable. But you need to use a language which has a standard for interacting with databases.
Parts of the library are interchangeable, but inevitably much of it is not. A Postgres and SQLite client might be able to share some query parsing logic between the two databases, but the wire protocol (or lack thereof for SQLite) is not the same, nor is the exact sub/super-set of the SQL standard implemented by each. Basically the only code you could reliably share is an AST parser for the most common subset of the SQL standard that's implemented by both.
Yeah, that's true, so that's one less feature they'd be able to share. Unless you're creating an ORM, there's not much left aside from connecting to the database (different for all of them), sending packets (different for all of them), and maybe parameterizing queries. So basically you can make a query builder. Then of course you can have "drivers" for each database, but I'd consider that philosophically different from using the same library "without switching code" (IMO separate drivers implies "switching code" since each driver needs to be maintained separately and depends on a different upstream dependency).
Agreed. Wrapping all db access in its own class or module is one of the easiest architectural decisions to be made in a project. Allows you to use specific drivers but isolates changes to a few easily found locations. Makes it dead simple to test too.
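A sketch of that shape (all names illustrative):

```go
package store

import (
	"context"
	"database/sql"
)

// UserStore is the only place in the codebase that speaks SQL; swapping a
// driver (or a database) means revisiting exactly this package.
type UserStore struct{ db *sql.DB }

func NewUserStore(db *sql.DB) *UserStore { return &UserStore{db: db} }

func (s *UserStore) UserName(ctx context.Context, id int64) (string, error) {
	var name string
	// Placeholder syntax ($1 vs ? etc.) is one of the driver-specific details
	// this wrapper keeps in one spot.
	err := s.db.QueryRowContext(ctx,
		`SELECT name FROM users WHERE id = $1`, id).Scan(&name)
	return name, err
}
```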
> In Go, switching database drivers is as simple as importing the new library.
Neat!
Kinda like Java. (JDBC)
Kinda like C#. (IDb)
Kinda like Python. (DB-API)
Kinda like PHP. (PDO)
Kinda like Perl. (DBI)
Kinda like C/C++. (ODBC)
Of course, you work at the lowest common denominator and have abstractions that don't quite match the implementation. And let's be honest, how often do you switch databases?
But yeah this is very common.
---
I'd argue that in practice, Java follows your advice the most of any community.
If you are familiar with Go, you're gonna be at home with Zig's standard library in general. `std.io.writer`, `std.io.reader`, `std.io.multiWriter`, etc. are all there and pretty similar to Go's interfaces.
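For reference, here's what that composability looks like on the Go side, where everything is written against the stdlib `io.Writer` interface (a minimal sketch):

```go
package main

import (
	"io"
	"os"
	"strings"
)

func main() {
	var buf strings.Builder
	// io.MultiWriter returns a writer that duplicates each write to all of
	// its writers - the same fan-out std.io.multiWriter provides in Zig.
	w := io.MultiWriter(os.Stdout, &buf)
	io.Copy(w, strings.NewReader("hello\n"))
	_ = buf.String() // buf now holds a second copy of the output
}
```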
> I wish more programming languages provided interfaces in the standard library for libraries to build around, like Go does.
Java does, but apart from JDBC for databases and the Servlet API, I haven't seen many cases where that really improved the world of programming. Java EE relied heavily on such abstractions, but nowadays those would be replaced by simpler, less abstract constructs with fewer indirections.
> This is huge for things like database drivers, which might become outdated or not support certain features. In Go, switching database drivers is as simple as importing the new library. You don't have to change your code.
Database drivers are an abstraction that tends to leak its underlying technology very fast.
The idea of just being able to swap your database out under the hood is neat, but it generally holds only for simple projects.
Also, the abstraction that really helps you here is SQL, not the driver.
> I wish more programming languages provided interfaces in the standard library for libraries to build around, like Go does. It makes using libraries a lot better because you aren't dependent on any one library, as long as it uses the std library interface.
So, standard interfaces, or dynamic or structural typing? Lots of languages have those (some have all three).
As someone who only has some minor experience with Go, can you maybe elaborate a bit on what you mean by the stdlib providing such interfaces? I don’t recall any such paradigm being called out when first learning the language and it sounds pretty interesting.
Notice that the SQLite package isn't used directly; it's only imported to handle the database behind the scenes. All the actual code is written using only the standard library.
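A minimal sketch of that pattern, assuming the mattn/go-sqlite3 driver:

```go
package main

import (
	"database/sql"
	"log"

	// Blank import: the package is never referenced directly; its init()
	// registers the driver with database/sql behind the scenes.
	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// Everything from here on uses only the standard library's API.
}
```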
In Go if you'd like to make an interface-compatible database driver, you have to construct structs with non-exported fields from another package using unsafe.
database/sql provides a generic interface for SQL databases, which drivers implement. You can switch from one Postgres driver, like pq, to another, like pgx, easily, because they both support database/sql.
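A sketch of what that swap looks like in practice, assuming pgx's database/sql adapter (which registers itself as "pgx"):

```go
package main

import (
	"database/sql"
	"log"

	// _ "github.com/lib/pq"           // old driver, registered as "postgres"
	_ "github.com/jackc/pgx/v5/stdlib" // new driver, registered as "pgx"
)

func main() {
	// Only the import and the driver name change; every query below is
	// written against database/sql and stays exactly the same.
	db, err := sql.Open("pgx", "postgres://localhost:5432/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```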
I just tried it out and it's very slow. Using `wrk` to hit a basic endpoint that just prints "hello", I got ~500 req/sec. Using a third party zig implementation, I got ~175000 req/sec.
It's also cumbersome both to set up and to use.
Before learning Zig, I used to think Zig needed an http server in the standard library. After using it for a few months, and watching this implementation get added, I think it's a mistake - there just isn't enough bandwidth to support a _quality_ kitchen-sink included stdlib.
I do not think fast needs to be a goal? Great to have, sure, but a slow, stable, compliant implementation is perfectly fit for purpose in the standard library.
If you are getting 500 reqs per second for a statically typed compiled systems programming language with no garbage collector on a modern machine, you have bigger problems than standards compliance.
1 GHz / 500 requests per second is ~2 million cycles per request. Taking 2 million cycles to respond to a request in a benchmarked environment on localhost means something has gone horribly wrong.
One of the things that made Go good is that the stdlib may not have everything, but what it has was production grade in terms of performance. I don't see value in spreading work out thin just to have something in the stdlib.
I wish Zig had Go-like interfaces. I understand why they don't as they have decided control flow must be explicit but having to include a heap of boilerplate to get the same result isn't a win for readability or maintenance. Given my limitations and tastes I would not want to write or maintain web backend code in Zig but there is a lot to like about Zig as a C replacement and having batteries included for things like HTTP is a win for any language.
> having batteries included for things like HTTP is a win for any language
I actually kind of disagree with this.
I was a super early Go contributor and helped a tiny bit on the HTTP library (back then it was the core `http` package, now it's under `net/http`). IMO, a lot of HTTP stuff tends to be very "grey area" (what are sensible timeouts? how do you handle different socket errors? should we allow self-signed certificates?) so a lot of opinionated design debate ends up happening, and there's also a lot of scope creep, including having to include things like SSL/proxying if you really want "batteries" to be included, which is a lot of non-trivial work (all of a sudden, you also need an elliptic curve cryptography lib, too) that basically has nothing to do with the language itself.
I know we live in an "HTTP world" and it's a lot easier to sell a language that can also do stuff on the web, but it would be pretty far down my list.
Agreed about Go-like interfaces. I don’t think it’d fit Zig to make them dynamic at runtime the way Go does, but even just as a compile-time constraint it would make building a composable ecosystem like Go’s much easier. See `writer: anytype`.
There have been many arguments that Zig's try/catch is hidden control flow. Though many would argue that trying to stick to "catch phrases" can result in painting oneself into a corner, versus doing what's best to accomplish a task.
Lots of people love interfaces (or similar constructs like "traits"). They help them get what they need done, and there are lots of examples of their usage in different languages. Accomplishing the task is what's most important.
My understanding is http/2 is out of scope. The purpose is to have functional http to implement the package manager. I would expect a high performance http client/server to be a third party library, not std lib.
Sure. But I wouldn't do that for 1.0 of the lang. It sounds quite feasible to add support for that in 1.x if the community needs it. There's a reasonable path to upgrade to http/2 while spending more effort into what's critical for the lang to get to 1.0.
[Edit] And it's quite feasible to decide later that http/2 was never needed for this usecase.
Most source code packages, including their metadata, are generally small even as an aggregated compressed file, and it's very common to download a lot of them all at one time once you've resolved the necessary dependencies. It depends on several factors though -- including community ones (are lots of tiny packages encouraged?) and how things like the build system work when downloading things.
In practice, it's very much an appropriate use case for multiplexing, if you ask me. But not having it isn't a dealbreaker either, IMO. It's a bit more work to support HTTP/2 and can be done rather transparently later on anyway, since the underlying transport can be switched and has an upgrade path.
Regarding 1, do you mean that the server would send the response before the request has been received completely? Or is the response for a different request?
HTTP/1.1 pipelining is not the same as HTTP/2 multiplexing.
Obviously, one cannot multiplex with HTTP/1.1, but AFAIK there is still a question of whether someone can pipeline (not multiplex) with HTTP/2. HTTP/2, introduced by an advertising company and developed in part by CDN service providers, is designed for web pages that auto-load resources from a variety of hosts, namely advertising servers. It's commercially-focused.
Here is an old example of using HTTP/1.1 pipelining for the basic task of fetching many files from the same host over a single TCP connection.
Of course there is much more that one can do with HTTP/1.1 pipelining, such as fetching 100s or 1000s of pages, as "streaming" text/html, from a website in a single TCP connection. It is also possible to HTTP/1.1 pipeline POST requests. IME, HTTP/1.1 pipelining is generally fast and reliable, a very useful and convenient feature for web users, one that I have been using for two decades.
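For illustration (not the original example; the host and paths are placeholders), the basic idea looks something like this in Go:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"net/http"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	paths := []string{"/a.txt", "/b.txt", "/c.txt"}
	for _, p := range paths { // write all requests back to back, without waiting
		fmt.Fprintf(conn, "GET %s HTTP/1.1\r\nHost: example.com\r\n\r\n", p)
	}

	r := bufio.NewReader(conn)
	for _, p := range paths { // responses come back in request order
		resp, err := http.ReadResponse(r, nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p, resp.Status)
		resp.Body.Close() // drains the body so the next response can be read
	}
}
```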
HTTP/2 proponents on HN will sometimes insist that HTTP/1.1 pipelining is entirely supplanted by HTTP/2 multiplexing, ignoring that the web can also be used for non-commercial, non-advertising purposes. They will argue that HTTP/1.1 pipelining is useless because it is not supported by web browsers and cannot support improved e-commerce accompanied by online advertising, data collection and tracking, e.g., websites comprised of resources from a variety of hosts, especially advertisers.
This is a mistake. HTTP/1.1 pipelining and HTTP/2 multiplexing are two different things and they can and they will co-exist. The web is not just for Google and other so-called "tech" companies. Nor is it only for their chosen HTTP clients, "Chrome" and what not. The web is not just for commerce. It is for non-commercial web users, too. It's open to all HTTP clients, including this one from Zig. The web is a public resource.
HTTP/2 was based on SPDY which was created by Google. HTTP/2, however, is an IETF standard. The IETF is not an advertising company. The claim that HTTP/2 was introduced by an advertising company is false.
The claim that HTTP/2 is a commercial technology is unsupported.
HTTP/1.1 pipelining is flawed. The problem is that if you pipeline requests for A, B, and C, the responses need to come back for A, B, and C in that order. The server can choose to either process pipelined requests serially or in parallel, at its option. If the server chooses to process the requests serially, it more or less defeats the purpose of pipelining. If the server processes the requests in parallel, it has a problem: let's say that A is a slow request and that B and C are fast requests that generate a large amount of data - the server has to wait for A to complete before it can respond with B or C, but in the meantime it has to store the responses for B and C somewhere. This is a great way to DOS a server: pipeline a bunch of slow requests followed by large ones and see if the server runs out of memory or disk space.
Servers don't support pipelining because it can cause a DOS. Clients don't support pipelining because servers don't, and also because the head-of-line blocking caused by a slow response means that they will often get better results by opening up more connections than by trying to pipeline things.
There isn't a conspiracy. Pipelining just isn't a good technology.
No, it's a great technology. For my purposes. Yours may be different.
I'm using the web for data retrieval. Text retrieval. I'm not interested in "interactive" web pages.
Every web user is entitled to their own opinion. I'll respect the opinions of those who like HTTP/2 so long as they respect the opinions of someone else who likes HTTP/1.1. From a privacy perspective HTTP/2 is flawed. But for me the more pertinent issue is that it's overkill. I'm using HTTP/1.1 for data retrieval from the same host, where I want responses returned in the order they were requested, with HTTP headers. I'm not retrieving images, CSS and the like. I'm not looking at graphics. I'm retrieving text to be read in textmode. I have no need for "interactivity" and no need for multiplexing. For this purpose, HTTP/1.1 works beautifully. Nothing any HN commenter blurts out will change that fact. This reply was not even honest. "Servers don't support pipelining..." Where is this coming from.
I've been using HTTP/1.1 for over 20 years. I use it on a daily basis. As long as every httpd continues to support it, as they have for decades, I'll continue to use it.
Pipelining was designed to avert DoS by only opening one TCP connection. When RFC2616 came out, servers had problems with clients opening many TCP connections at once. This was considered poor netiquette. IETF wanted us to limit the number of connections to two.^1 Do so-called "modern" web browsers and other contemporary clients follow the old netiquette. As an HTTP/1.1 pipelining user, I only open a single connection. I'm still following the old netiquette.
What's funny about these replies trying to attack someone's use of HTTP/1.1 pipelining (which is quite strange if you ask me -- why would anyone care) is that the people making the replies have never tried to do what this person using HTTP/1.1 pipelining is doing. How could they claim it's "slow". Trust me, if it was typically slow I would not use it. Very rarely is it slow, and even then it isn't any slower than making sequential TCP connections.
1.
Some excerpts
RFC 2616 HTTP/1.1 June 1999
8 Connections
8.1 Persistent Connections
8.1.1 Purpose
Prior to persistent connections, a separate TCP connection was established to fetch each URL, increasing the load on HTTP servers and causing congestion on the Internet.
Persistent HTTP connections have a number of advantages:
- By opening and closing fewer TCP connections, CPU time is saved in routers and hosts (clients, servers, proxies, gateways, tunnels, or caches), and memory used for TCP protocol control blocks can be saved in hosts.
- HTTP requests and responses can be pipelined on a connection. Pipelining allows a client to make multiple requests without waiting for each response, allowing a single TCP connection to be used much more efficiently, with much lower elapsed time.
- Network congestion is reduced by reducing the number of packets caused by TCP opens, and by allowing TCP sufficient time to determine the congestion state of the network.
- Latency on subsequent requests is reduced since there is no time spent in TCP's connection opening handshake.
8.1.2.2 Pipelining
A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received.
8.1.4 Practical Considerations
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.
HTTP/1.1 is supported by all browsers, right? Using it doesn't give up any interop. At least until said ad company develops enough of a monopoly to move into the Extinguish phase.
I don't know anything about pipelining or multiplexing, but I too have the impression standards bodies are now dominated by large browser vendors. Older standards therefore seem less corporate.
And interop is quite valuable for web users. But for the so-called "tech" companies and CDN service providers authoring the HTTP/2 RFCs, maybe not so much.
Who receives the primary benefit of HTTP/2. Certainly not web users. Perhaps they get some small secondary benefits.
What does not make any sense to me is why past/present Googlers and other HTTP/2 proponents voting and replying on HN are offended by someone who likes using HTTP/1.1. For pipelining. The (non-browser) interop is much better than HTTP/2. That is, using 1.1, I can pipeline HTTP to/from almost every httpd on the internet, using a vast array of TCP clients written over a long period. If I want to use HTTP/2, the number of libraries and clients is much smaller and all are recent. Further, AFAIK these clients cannot pipeline the way 1.1 does, retrieving many files from the same host sequentially over a single TCP connection, in the order they were requested, with HTTP headers. Google existed when RFC2616 came out. If 1.1 was so flawed, why not try to change it then. HTTP/1.1 is flawed for what Google and other so-called "tech" companies want to do, always with a browser or mobile OS they control, not necessarily what web users want to do, with whatever clients web users choose, 100% of the time. We've seen the stuff so-called "tech" companies get up to and it usually involves surveillance to support commerce. HTTP/2 isn't going to solve or alleviate any of those ills.
To keep the Wall Street analysts happy, Google will not be adding to their browser, nor inspiring and backing, standards that translate to less profit for Google. If a new standard decreases the amount of data collection or tracking, that's less profit for Google. HTTP/2 is not such a standard.
Many people are not aware that - currently - Zig completely rebuilds your entire application with every compilation - including any parts of the standard library you use. There is not yet any incremental compilation.
It takes a lot of time investment to make compilation fast, and we have gone all in on this investment. There was almost an entire year when not much happened besides the compiler rewrite (which is now done). Finally, we are starting to see some fruits of our labor. Some upcoming milestones that will affect compilation speed:
* x86 backend (90% complete) - eliminates the biggest compilation speed bottleneck, which is LLVM.
* our own self-hosted ELF linker - instead of relying on LLD, we tightly couple our linker directly with the compiler in order to speed up compilation.
* incremental compilation (in place binary patching) - after this Zig will only compile functions affected by changes since the last build.
* Interned types & values. Speed up Semantic Analysis which is the remaining bottleneck of the compiler
* Introduce a separate thread for linking / machine code generation, instead of ping-ponging between Semantic Analysis & linking/mcg
* Multi-threaded semantic analysis. Attack the last remaining bottleneck of the compiler with brute force
Anyway, I won't argue with cold hard performance facts if you have some to share, but I won't stand accused of failing to prioritize compilation speed.
Zig is working on incremental linking, and even hot-code-swapping while your program is running. I would suggest Zig cares more about compilation speed than most other languages, just hasn't gotten there yet.
The Zig compiler is significantly limited in performance by the llvm backend. Even with a different backend, I also suspect that the language is already complex enough that it is difficult to write a genuinely fast compiler (by which I mean a compiler that can produce good machine code from most files less than say 10k lines of code in under 50ms, which I am quite certain is possible -- but llvm takes about 50ms just to start up and is also slow once it actually starts doing work.)
To write a fast compiler, it has to be fast from the start. It is extremely difficult to make a slow compiler fast because usually there are pervasive design issues that cannot be eliminated by hotspot optimization. These design decisions often reflect the design of the language itself, which is to say that some languages are more amenable to fast compilation than others and I suspect that Zig is at best average or slightly above average for languages of similar expressivity.
> These design decisions often reflect the design of the language itself, which is to say that some languages are more amenable to fast compilation than others and I suspect that Zig is at best average or slightly above average for languages of similar expressivity.
Extremely correct. In particular, if care isn't taken at the initial design phase of a language to ensure compilation isn't slow, it's likely going to end up being just that. Much in the same way untested code is likely to be incorrect.
Note that I used "isn't slow" instead of "is fast" deliberately. You don't need to, nor should, stress about low-level optimizations, you just need to get the high-level design right. A poorly written quicksort will run circles around the most optimized bubble sort.
You're correct, Zig is self-hosted, with its own non-LLVM backends heavily in development; lots of wildly incorrect statements about both Zig and Go in this thread.
Yes, and when it was announced, it took 2 minutes and IIRC 2-4GB of RAM to compile (I believe this was with the LLVM backend). I consider 2 minutes to self-compile a language slow. I have personally written a compiler for a simpler language than Zig, using only Lua to bootstrap it, and it takes less than a second to self-compile and 10s of MB of memory. But again, this is possible because this language is much simpler, yet in some, but not all, ways it is significantly more expressive than Zig.
WRT the other backends, I was aware of them, which is why I said _even with a different backend_. I don't think they have a prayer of self-compiling Zig in under a second with the current language design with any backend, but I'd be happy to be proved wrong.
Zig is plenty fast for a compiled language with optimizations. The Go compiler has no debug/release profiles and the generated machine code isn’t as optimized as say a language using llvm. If you compare it to C++ or Rust, Zig actually has better compile times, and it keeps improving.
It's not literally every line, but `try` means "unwrap the error union result or pass the error back up to the caller". It's roughly equivalent to Rust's `?` operator (although Rust's can also unwrap optionals or return `None`)
It makes it obvious that those calls can fail. Every single write to disk, network or malloc can in theory fail. This just makes it explicit that some code needs to handle those edge cases (or, y'know, pass them on to the human user).
In addition to the other answers, this is the most logical way to handle most errors in this specific type of code. The http code in the Zig standard library doesn’t know how the programmer wants to handle all the errors, so it uses try to bubble up any error it doesn’t know how to handle.
Why so many uses of try? Because Zig is all about readability, and anytime you see try you _know_ that the called function can possibly fail.
Rust used to use the word try in a similar context, but it was considered too noisy, and Rust opted for ? instead, which I think worked out very well in practice.
Rust's ? is the Try operator, which currently uses Try::branch() to produce a ControlFlow, and then if the ControlFlow is Break it returns whatever is inside the Break.
Historically, ? just did what try! did; while that's equivalent to the current behaviour for Result, there are several more implementations of the Try trait today (and in nightly you can implement it on your own types).
What are you doing with lines 1 and 2? Most people just `return fmt.Errorf("stuff the caller can't know: %w", err)` and call it a day. To me, this beats exceptions because a human can provide context; what iteration of the loop were we on, what specific sub-operation failed, etc.
Spending a lot of time typing in error handling code seems OK to me. You are never going to get paged when your software is working. So you'll probably spend more of your time dealing with rare error conditions. Thus, the code spends a lot of lines capturing the circumstances around the error conditions, so you can go change a line of code and redeploy instead of spending 6 months debugging.
I don't think they're arguing that exceptions are better, just that `try` (in Zig) or `?` (in Rust) is nicer than `if err != nil { ... }`. It's the one thing about go that I really wish had better ergonomics.
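To make the comparison concrete, here's the Go shape being discussed, with hypothetical `Config`/`parse` stand-ins:

```go
package config

import (
	"fmt"
	"os"
)

type Config struct{ raw []byte } // hypothetical config type

func parse(b []byte) (*Config, error) { return &Config{raw: b}, nil } // stand-in parser

// Each fallible call costs three lines in Go; Zig's `try` and Rust's `?`
// compress the same "propagate the error to the caller" step into one token.
// The upside of the Go form is a natural spot to attach context, as below.
func LoadConfig(path string) (*Config, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read %s: %w", path, err)
	}
	cfg, err := parse(raw)
	if err != nil {
		return nil, fmt.Errorf("parse %s: %w", path, err)
	}
	return cfg, nil
}
```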
I think an HTTP server in the stdlib is justified in today's world; everything is connected. It should also include a TLS 1.3 implementation (it has some basic TLS based on BearSSL); both are essential nowadays.
Is there a Rust vs Zig war going on? I just started to learn Rust because I figured that it was going to be the C++ successor and I want to be sure that my skills are (somewhat) future-proof. So what's going on with Zig? Should I learn Zig?
You can just ignore Zig. It will be there for you when you want it, and it will take you less than an hour to learn when the time comes that you want to use it.
Zig is as small as C, perfect for my brain to actually be able to remember without checking some hundreds-of-pages reference book like other languages have; learning it now.
My only 'wish' for Zig is to replace its {} with indentation, kind of like Nim/Python, for readability.
It's a language made for people who like to do things a certain way both in terms of technical details (eg no macros, no hidden control flow) and when it comes to governance of the project (small, independent, non-profit organization free from big tech influence). While we do believe that there's value in this approach, this doesn't mean any other way of doing things is now obsoleted in favor of what we're doing :^)
More in general, Zig and Rust are different enough that if you spend enough time to properly evaluate both, you will probably find one more congenial to you than the other. I like Zig because I like simplicity, somebody else might like Rust because they like sound memory safety guarantees and a powerful type system.
You shouldn't see Zig as a threat to Rust, also because Rust already succeeded pretty much, and in fact I would recommend Rust as the much safer bet if you're looking for a new language to bet on, all else being equal.
> Is there a Rust vs Zig war going on?
Uhhh, no? There are certainly different sensibilities at play when it comes to the different communities, but that's it. I think I understand where this is coming from, but I really feel compelled to point out that I find it crazy how people get really worked up when it comes to programming languages and immediately assume there has to be a fight for ultimate dominance, while remaining completely oblivious to actual wars being fought under their noses. If you want to look at a war, watch Deno vs Bun, where different VCs have backed a different horse, and now they truly are in a battle for dominance. The ZSF has taken no VC money, has no big tech company brand to bolster, and lives off of donations. We just want to be able to move forward with the development of Zig and, for as long as we can do it, all is good, no need to convert the entire planet.
If you do end up evaluating Zig, I would recommend reading these two non technical blog posts about the ZSF & the people involved with it:
My understanding is that rust is a better C++, Zig is a better C. They're better than their respective antecedents in different ways (Rust is memory safe, Zig is ergonomic).
... wat? imperative; compiled to machine code, either static exe or dynamic libs; builtin RAII & polymorphism (parametric, subtyping); maps, vectors, trees, and other data structures in the stdlib; raw access to pointers; C ABI compat. there are some big differences, but "nothing like c++" is absolutely not true.
I see this line repeated a lot, especially in Zig spaces, but it doesn't really make sense to me. Rust is an excellent replacement for C, even though it has more features. What makes C the language of choice for so many applications is not its lack of features.
> I see [Rust : Zig :: C++ : C] repeated a lot, especially in Zig spaces, but it doesn't really make sense to me. Rust is an excellent replacement for C, even though it has more features.
It’s the feels, not the features :) Rust code feels like C++ code, at least when you’re reading it (I haven’t written any worth talking about). It puts the problem domain in similar terms, it has similar transparency (or lack thereof) regarding what it’s copying or allocating or whatnot, and so on. In that respect (!) they are closer to each other than C is to either—at least vanilla C, not a DSL (“language overhaul mod”?) like GObject. And it makes sense to target the C side of that divide in a new language (though again I lack the experience to say to which degree Zig succeeds in hitting that target).
The feels are not very accurate. Rust adopted C++-like syntax to look less weird to C++ programmers, but semantically it is quite a different language. It's more like a low-level OCaml than C++.
Notably:
• Rust doesn't have inheritance. This makes a lot of basic C++ programming patterns unfit for Rust, and prevents 1:1 translation of C++ to Rust.
• Rust's generics may look like C++ templates, but they're not. Rust's macros behave more like C++ templates, and generics are closer to C++ concepts, but neither is a close match. C++ programmers are generally flabbergasted by how hard it is to make a function that takes any integer type in Rust.
• Even though Rust copied C++ moves, and has "RAII", the way these are used in practice ends up different due to having opposite defaults and different guarantees. Rust doesn't have constructors. Its closest equivalent of exceptions is for a different purpose. Rust's types are always movable, don't have meaningful addresses, can't reference own fields. Even C++'s std::string is hopelessly incompatible with Rust.
C++ has two string types, because of C legacy. Rust has two (and more) string types to express different modes of ownership. C++ is a complex language, because they keep adding more ways to initialize a variable. Rust has exactly one way. But Rust has many other features, mostly in its type system, to define thread-safety, memory-safety, and memory management in detail that is beyond what C++ can express. So "but they're both big" is glossing over all the reasons why.
However, C happens to be almost a clean subset of Rust. You can take a C program and translate it line by line to Rust. It won't be idiomatic, but may be easy to refactor into proper Rust. That's generally not true with C++, which requires rethinking everything from basic idioms and constructs to the overall architecture. This is why Rust struggles with GUI libraries, and the best-supported native toolkit is from C.
> However, C happens to be almost a clean subset of Rust. You can take a C program and translate it line by line to Rust.
I can't see how that's possible, unless you litter your code with `unsafe` all over the place. Rust requires a lot of restructuring around ownership and borrowing to get your code to even compile.
I've done it with a few libraries. They usually have defined ownership, but it's specified in the library's manual, not in the code. Even unusual patterns can generally be mapped to Cow or Option or worst case some custom smart pointer with minimal amount of unsafe. But very often merely converting pointers to & vs &mut vs Box as appropriate gets the job done.
The most common sin is C libraries taking just a pointer and hoping for the best, instead of pointer+length for buffers (slices). But that's also a "local" problem you can fix by adding an extra argument, generally not a major redesign.
Thread-safety tends to be worse to map, because C reasons about safety of function calls, while Rust about sharing of data types. So "it's safe to call foo unless option bar is set to -5" doesn't translate well.
C++ is the most important source inspiring Rust throughout its development. Rust devs just like to downplay that role.
- Rust has interface inheritance with pretty much the same syntax
- Rust’s generics are monomorphized, exactly like C++ templates. They also have the same syntax. The only difference is that Rust generics are trait bound.
- Rust RAII is used exactly as it's used in C++. Moves in Rust are destructive, while in C++ they are not. At the time when move semantics were proposed for C++ in the C++03 era, both types were proposed, and the committee decided to go with non-destructive moves.
- Scope resolution syntax is also a carryover from C++.
When the words superset and subset are used in regard to C, C++ and Objective-C, it is quite a misunderstanding to describe C as a subset of Rust.
Sibling comment also says "has interface inheritance with pretty much the same syntax", but this is another case where people see the same ASCII char and think it's the same feature, when it's so different.
The A:B syntax is not inheritance, but syntactic sugar for an 'A where Self: B' bound.
The result is that these remain separate traits without support for subtyping. A dyn trait A:B can't be used where B is required, unless you have access to the concrete type and make a new vtable for B from scratch. There's WIP to fix that, but it's not here yet.
The auto traits that look like subtyping are for built-in marker traits only.
Lack of data inheritance is painful. There's no support for fields in traits. Getters/setters are problematic due to borrowing all of self instead of just the field.
Traits can't have private or protected methods.
Traits aren't inherent to the types, so you have to import both the type and the trait to use it. It's messy and makes interface docs confusingly fragmented.
So Rust really really isn't an OOP language, even though you can put together an awkward-to-use imitation from a few ill-fitting features — but they use the same sigils as OO in C++!
Rust limits programs (even when you use unsafe) to those you can prove correct to the compiler, and it has a habit of taking on more dependencies than necessary for a given task (using syscalls and locks when not required, allocating frequently, ...). It's a nice language, but it made tradeoffs, and a meaningful fraction of C code would be hard to port to or even link from Rust.
It's been a long time since I've programmed in C, but AFAIK the major pain points were and continue to be the build system and the preprocessor (or consequences of those). Zig does seem to fix those in a pretty simple way.
It is not "better", it is different. Rust is "safer", although with modern C++ and its tooling I haven't really encountered any safety issues for about 3 years now. Looking at the feature set, C++ has way more. In my opinion Rust sits somewhere between C and C++.
I don't want to get grabbed on a side street and beaten but... Rust is a language designed to solve a process problem (bad buggy code) with a straitjacket-language approach. It is pedantic to the point of being unreadable, and in its own problem domain (system, embedded and up) it has been ticking the wrong boxes.
Zig is substantially better, yet I think a better language is still around the corner. Having said that, you absolutely can't go wrong mastering C. Zig wraps C in a way that doesn't choke you on configuration issues. Zig's out-of-the-box support for many targets is a savvy decision too.
However, both Zig and Rust make some of the same errors, so I will put my plea here for future language developers: how you work with threads, async, and interrupts is all implementation detail. Please don't make them part of the language. A good function type is sufficient to make elegant solutions that do not require language support.
Rust is by far the more mature option if you're really trying to replace C++. Zig is personally interesting to me though. But if you need to write actual production critical code then you should definitely go with Rust.
Honestly, it's still a little bit surreal to me that I won't even consider C++ now for a new project after so many years of my career.
idk about a "war" or whatever, but features aside, zig feels like a less obnoxious language, and i mean that as politely as possible. to make an analogy, rust is like a friend that constantly nags about _everything_; like you know they mean well, but girl chill i know. about to cross a street, rust reminds you to look both ways. empty street with no cars around and it's safe to cross, rust won't let you cross until the crosswalk sign lights up. cooking something, rust will pester you about proper knife safety. like yes i know, being a nuisance about things i already know about doesn't help me write better software; i already know to look both ways before crossing. every once in a while it catches something you wouldn't have otherwise, but you have to deal with that constant pestering _all_ the time. also, just my personal opinion, but rust has some of the worst language ergonomics i've seen in any modern mainstream language and i don't think it can reasonably be fixed due to its 1.0 promise
Rust is annoying when you are doing something you know is sound, "yes I looked both ways and re-checked twice, chill out OK", but the compiler yells at you anyway. Zig is great in such situations. Rust really shines when you are doing something you are not confident at all in—juggling 17 knives, in a large codebase with many contributors that you are new to, and you haven't had any coffee—and you can relax because you know the compiler will catch your soundness mistakes.
I find Rust easy to use when you are doing high-level programming, but it becomes a very annoying language when you are trying to do something low level, or trying to manage memory yourself.
"Rust vs. Zig" is like "Ruby vs. Python": both languages occupy more or less the same space, but have a rather different approach to things. Which is "better"? It's kind of a matter of preference and taste.
That said, Zig is still very much in active development and isn't stable. For example the upcoming 0.11 release will change some syntax. Personally I like Zig, but you need to be prepared to deal with these kind of changes and instabilities for the time being, much like early Rust adopters had to before 1.0. Also the Zig ecosystem is less mature: fewer libraries, learning resources, etc. Purely objectively speaking, I think that's the biggest difference at this point.
I hear this repeated quite often in Zig crowds, but Rust is absolutely, 100% aiming to displace C. There's no way it would have made it into the Linux kernel otherwise. It just so happens that it can also replace most of C++, which is why it's also found in the Windows kernel.
Not sure I follow your line of reason. From a technical standpoint, I see only one reason why Rust could not completely replace C. And that reason is obscure hardware platforms that LLVM doesn't target. Otherwise, it's superior in every way I can imagine. More complex? I guess that depends on your project. If you work mostly in small code bases where it's easy to get things right, then I'd say it's a toss up. But if you're dealing with a large multi-threaded code base, with lots of heap allocations and shared ownership etc... Rust will be far less complex than C. I highly doubt the Linux kernel developers introduced Rust to add more complexity. They did it to reduce complexity in the system, and make it more accessible to new developers. The Rust compiler is soooo liberating once you get over the initial learning curve, which I admit can be a little annoying.
There are a vast number of "obscure hardware platforms", especially once you include embedded systems. There are also many circumstances where you must write unsafe code. While you could in theory use Rust for most cases where C is being used, there are a lot of cases where it would be a bad fit. There will never be one language that can do everything well.
For those obscure hardware platforms, I agree that Rust isn't a good tool for the job. (Although there are GCC integrations for Rust too, which may solve those issues in the future.) But arguing that having to write unsafe Rust means it's a bad fit makes no sense. 100% of C is "unsafe" in that sense of the word, whereas you may reduce that to close to 0% with Rust. Even if you only got to 50%, that's a large chunk of your code base that you have certain guarantees about, simply by compiling it. Anyway, my whole point here is that I haven't heard any good arguments about why Rust isn't a good C replacement, other than the fact that it cannot target as many platforms. If it targets the ones you care about, then it's a wonderful choice to replace C.
For a lot of cases, 0% unsafe code is pretty unrealistic. Using even 50% unsafe code means that you must be able to reason about how safe and reliable the code really is. At which point you are better off with a simple language that a good developer can quickly figure out what is happening. A complex language makes this harder, not easier.
I think we can just disagree here. But I have to assume you haven't done any significant development in Rust, or at least haven't experienced the freedom it gives once you get over the initial awkwardness. I find Rust is easier to reason about than C, partially because it's more explicit. And that its compiler is better than a good C developer. I've mentored many veteran embedded C developers who scoffed at Rust with a similar attitude. Within a month or two though, they all changed their tune and came to prefer it.
You are basically attacking a strawman. It is not Rust vs. C, but Rust vs. alternatives to C. It is unlikely that Rust will end up being better than all alternatives that are being developed.
Not any more than there was a C/C++ war. C++ is for people that aren't scared of features and want their code to be reliable. C is for people that really value simplicity and don't care about the odd segfault or catastrophic security vulnerability (in some situations that is reasonable).
Nim has been unpleasant when trying to get an HTTPS client working with a static build, because it needs OpenSSL. Does this support native TLS and static linking?