That’s pretty presumptuous about how obviously the author could improve it. As someone who writes a lot of docs, I find that feedback and preferences vary wildly. They may just as well have made it “worse” by your preferences by hand-editing it more.
No offense, but you sound more like a “principal coder”, not a principal engineer. At least in many domains and orgs, most principal engineers are already spending most of their time not coding. But -engineering- still takes up much or most of that time.
I felt what you describe. But it lasted about a week in December. Otherwise there’s still tons of stuff to build, and my teams need me to design the systems and review their designs. And their prompt machine is not replacing my good sense. There’s plenty of engineering to do, even if the coding writes itself.
What's the language you're thinking of that has more of these decisions fixed in the standard library? I know it's not Ruby, Python, Rust, or Javascript. Is it Java? I don't think this is something Elixir does better.
You're answering the question of "which languages have UUIDs in their standard libraries" (Javascript is not one of them). That's not the question I'm asking. If you wrote a new Python program today that needed to make HTTP requests, would you rely on the stdlib, or would you pull in a dep? In a Java program, if you were encrypting files or blobs, stdlib or dep?
Is C# the language that gives the Go stdlib a run for its money? I haven't used it much. JS, Python, and Ruby, I have, quite a bit, and I have the sprawling requirements.txts and Gemfiles to prove it.
I asked the question I did upthread because, while there are a lot of colorable arguments about what Go did wrong, a complete and practical standard library where the standard library's functionality is the idiomatic answer to the problems it addresses is one of the things Go happens to do distinctively well. Which makes dunking on it for this UUID thing kind of odd.
> If you wrote a new Python program today that needed to make HTTP requests, would you rely on the stdlib, or would you pull in a dep?
For a short script, the standard "urllib.request" module [0] works pretty well, and is usually my first choice since it's always installed. For a larger program, I'll usually use a third-party module with more features/async support, but only if I'm using other third-party dependencies anyway.
> JS, Python, and Ruby, I have, quite a bit, and I have the sprawling requirements.txts and Gemfiles to prove it.
I checked the top 10 Go repositories on GitHub [1], and all but 1 of them have 30+ direct dependencies listed in their "go.mod" files (and many more indirect ones). Also, both C and JavaScript are well-known for their terrible standard libraries, yet out of all languages, JavaScript programs tend to use the most dependencies, while C programs tend to use the least. So I don't think that the number of dependencies that an average program in a given language uses says anything about the quality of that language's standard library.
Fair enough, but the quality/breadth of the standard libraries is fairly topic-specific in Go (and all languages, really). There's a reason that you picked networking and crypto for your examples, since the Go standard library is indeed really strong here—I don't even like Go, but if I had to write a program that did lots of cryptography and networking, then Go would probably be my first choice.
But lots of programs (and most of the programs that I write) don't use any cryptography, and only have trivial networking requirements, and outside those areas, I'd argue that the Python standard library [0] has broader coverage, supports more features, and is better documented than the Go standard library [1].
The Go standard library is still pretty great though, and is well ahead of most other languages; I just personally think that it's a little worse than Python's. But if you mostly write networking/crypto code, I can easily see how you'd have the opposite opinion.
Like, at this point, I feel like we share premises. We disagree, but, fine, seems like a reasonable disagreement. A better one than how annoying it is that Golang lacks "basic stuff" like a standard UUID interface.
> I checked the top 10 Go repositories on GitHub [1], and all but 1 of them have 30+ direct dependencies listed in their "go.mod" files (and many more indirect ones).
Big projects having big dependencies, whoopty fucking do
I gave that example to refute the point that having worked on projects in other languages with tons of third-party dependencies implies that those languages have poor standard libraries. I certainly didn't intend to imply that that means that Go has a poor standard library or that small Go projects often use hundreds of third-party dependencies like JavaScript projects tend to do.
Go's package management is actually one of its strongest points, so I think that it's unsurprising/good that some projects have lots of dependencies. But I still stand by the point that you shouldn't judge a language based on how many dependencies most programs written in it use.
(Except for JavaScript, where I have no problem judging it by the npm craziness :) )
No one is debating whether Go is missing a uuid package from its standard library; the debate is about whether this is indicative of a general trend with the Go standard library (as the gp claimed above).
If you’re arguing as the grandparent did that Go regularly omits important packages from its standard library, then it’s not unreasonable to ask you for your idea of an exemplary stdlib.
I think the closest thing to an exemplary stdlib is Rust's std. I think it strikes a good balance between being too small and too big. Unfortunately, what Rust doesn't really have (yet) is a "blessed" set of libraries to supplement the stdlib.
The problems with a big "batteries included" standard library like Go's or Python's are that you will inevitably leave out things that at least some people consider "basic", like UUIDs; that parts of the stdlib will probably be lower quality than third-party libraries (like Python's urllib or Go's log package); and that it is constrained by needing to maintain backwards compatibility and being tied to the language's release cycle.
I didn’t have a strong opinion on standard library sizes until I tried Rust, but Rust was not a very pleasant experience because there were no standard packages or even consistent community advice for things as pervasive as error handling or async.
But I understand people have different opinions and that there will never be a standard library that appeases everyone. I don’t think “having a batteries included library” aims to appease everyone either. But mostly I just don’t understand the people who criticize Go for not having enough things in their standard library because Go’s standard library is one of the more useful (which may not be everyone’s ideal).
My first, and primary, programming language was C# which includes probably too large a standard library. It was definitely a surprise to see how minimal/simple other standard libraries are!
But they're there. I would much rather have a stdlib option which isn't as good as the most popular third-party solution than not have a stdlib option at all. You can't always count on being able to install stuff from PyPI, but you can always count on the stdlib being present.
Broadly speaking, maintaining a big std lib is a huge amount of work, so it makes sense that a language team is conservative about adding new surface to a std lib which they will then have to maintain for a long time.
The work involved in maintaining a standard library includes things like bug fixes. A larger standard library (or multiple versions) means there are more likely to be bugs. There are also performance improvements, and when new versions of the language come out with features that improve performance, you will most likely want to go back through and refactor some code to take advantage of them. You will also want to refactor to make code easier to maintain. All of this just gets harder with a larger surface.
And the more stuff you pack into the standard library the more expertise you need on the maintenance team for all these new libraries. And you don't want a standard library that is bad, because then people won't use it. And then you're stuck with the maintenance burden of code that no one uses. It's a big commitment to add something to a standard library.
Every library is a liability, especially in terms of API. There are many examples where the first take on a problem within a std lib was a bad choice and required a major overhaul. Once something is in the standard library, it’s effectively impossible to take it back without breaking the world if you don’t control the API consumers.
Yes, in Python they break something at every release now. It's terrible. It's mostly because they remove modules from their standard library for no good reason.
For example, they've removed asyncore, the original loop-based module from before the async/await syntax existed. All the software from that era needs a total rewrite. Luckily, in Debian the module is provided as a .deb package for now, so I didn't have to do the total rewrite.
edit: as usual on ycombinator, downvotes for saying something factually true that can be easily verified :D
The thread is about the code in the std lib being a huge amount of work because it needs to be kept working with new language releases.
And then you answered about downstream code breakage totally outside the std lib.
What would that entail, just a package whitelist? A few renamed packages? In the python 3 transition they renamed urllib2 to just urllib, but now it's almost a dead battery too and you want to use requests.
Honestly, the problem was they did not go far enough. They hoped for a minimal-churn switch to avoid getting locked into bikeshedding for the rest of time. However, there were so few user-facing improvements that the required changes hardly seemed worth porting for. They also initially failed to offer any automatic porting tooling, which could have increased adoption.
I will be forever mad that they did not use that as a breaking opportunity to namespace the standard library. Something like: `import std.io` so that other libraries can never conflict.
the idea of what 'batteries included' means has changed a lot in the past twenty years, and like most Go quirks, probably Google just didn't need <missing-things>.
16 random bytes is not a valid UUIDv4. I don’t think it needs to be in the standard library, but implementing the spec yourself is also not the right choice for 99% of cases.
If you generate random bytes, which are unlikely to conform to the UUIDv4 spec, my guess is that most libraries will silently accept the id. That is, generating random bytes will probably just work.
I didn't say just 16 random bytes. But you're almost there: you generate 16 random bytes and perform a few bitwise operations to set the version and variant bits correctly.
Not that it matters. I don't think there's a single piece of software in the world that would actually care about these bits rather than treating the whole byte array as an opaque thing.
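For concreteness, here's a minimal sketch of what "16 random bytes plus a few bitwise operations" looks like, using only the stdlib's crypto/rand (the helper name newUUIDv4 is mine, not from any library):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newUUIDv4 fills 16 bytes with cryptographic randomness, then sets
// the version nibble (4) and variant bits (10xx) per RFC 4122.
func newUUIDv4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4 in the high nibble of byte 6
	b[8] = (b[8] & 0x3f) | 0x80 // variant bits 10 in the top of byte 8
	// Format as the canonical 8-4-4-4-12 hex groups.
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[:4], b[4:6], b[6:8], b[8:10], b[10:]), nil
}

func main() {
	u, err := newUUIDv4()
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```

Whether that belongs in every codebase or in a stdlib is exactly the debate above; the point is only that the delta between "random bytes" and "spec-conformant UUIDv4" is two masked assignments.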
I think it saves labor and eventual bug hunting to include these in a stdlib. We should not be expected to look up the UUIDv4 spec and make sure you’re following it correctly. This feels like exactly what reasonable encapsulation should be.
I had a similar thought a while back. Looking at the code for existing UUID projects, issues they fixed, and in some cases the CVEs, is a good way to form a different opinion.
Some things are actually hard to implement; I spent a lot of time trying to implement a concurrent hashmap, for example. UUID is not one of those things.
I was disappointed by Go's poor support for human-focused logging. The log module is so basic that one might as well just use Printf. The slog module technically offers a line-based handler, but getting a traditional format out of it is painful at best, it lacks features that are common elsewhere, and it's somehow significantly slower than the json handler. I can only guess that it was added as an afterthought, by someone who doesn't normally do that kind of logging.
To be fair, I suppose this might make sense if Go is intended only for enterprisey environments. I often do projects outside of those environments, though, so I ended up spending a lot of time on a side quest to build what I expected to be built-in.
I haven't explored enough of the stdlib yet to know what else that I might expect is not there. If you have a wish list, would you care to share it?
What other languages have web socket support or JWT validation as part of their standard libraries?
I don’t disagree with you btw. JWT signing/verification and JWK support is especially something I would prefer had first class implementations. (I’m much less concerned with JWT parsing, which is trivial).
Go has one of the best stdlibs of any language. I'd go as far and say it's the #1 language where the stdlib is the most used for day to day programming. cut the bullshit
You can't comment like this on Hacker News, no matter what you're replying to. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
I cannot identify with this at all. We have Python and Go applications in production, and for Go the vibe is mostly "standard library plus a few dependencies" (e.g. SQL driver, opentelemetry) whereas with Python it's mostly "we need a dozen libraries just to get something done".
For example, Go has production-ready HTTP server and client implementations in the standard library. But with Python, you have to use FastAPI or Flask, and requests or httpx. For SQL there's SQLAlchemy, I guess, and probably some other alternatives (my Python knowledge is not that great), whereas again with Go the abstraction is just in the standard library and you only include the driver for the specific database.
We use Renovate to manage dependency upgrades. It runs once a week. Every Python project has a handful or more dependency upgrades waiting every week, primarily due to the huge amount of dependencies and transitive dependencies in each project. The Go projects sometimes have one or two, but most of the time they're silent because there is nothing to upgrade (partly due to just having so few dependencies to begin with).
I don't buy this for one moment. Python and breaking changes are lovers. Nobody I have ever worked with builds, or even tries to build, stdlib-only Python. Most Go devs pride themselves on minimal dependencies.
Eh, accuracy and reliability are a different topic, hashed out many times on HN. This thread is about productivity. I’m a staff engineer and I don’t know a single person not using AI. My senior engineers are estimating 40% gains in productivity.
And every time the issue is side-stepped by chatbot proponents.
Accuracy and reliability are necessary to know real productivity. If you have produced code that doesn't work right, you haven't "produced" anything (except in the economic sense of managing to get someone to pay for it).
For example, if you produce 5x more code at 5% reliability, the net result is a -75% change in productivity (ignoring the overhead costs of detecting said reliability).
We’ve all been waiting for the reliability shoe to drop for, what, a year now?
It’s only slop if you don’t understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.
> It’s only slop if you don’t understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.
So essentially like delegating all work to a beginner programmer, only 10x more frustrating? Well, that's not what I would classify under "Pocket PhD" or "Nation of PhDs in a datacenter", which is the bullshit propaganda the AI CEOs are relentlessly pushing. We should not have to figure this out for them - they were saying this will write ALL code in 6 months from "now", the last time "now" being January 2026, so in a little over 4.5 months. No, we should not be fixing this mess, f*k understanding the prompts and doing the code reviews of the AI slop. Why does it not work as advertised?
I’m not here to defend bs propaganda. I don’t think I’ve seen anyone defend that stuff. I don’t know if you’re shifting goalposts or that’s what you’ve always been worried about.
I’m just saying the productivity gains are real, even in serious production level and life critical systems.
If you are only able to think in binaries, no-AI or phd-AI, that’s a you problem.
Exactly this. High productivity, if all you do is generate slop and brainrot videos. If you are going to generate code with it... well, how productive was the genius at AWS who used Kiro to cause that December outage? 3 years ago that would have been a career-ending choice of productivity tools.
I can only explain it by people not having used agentic tools, or having tried them for a day nine months ago before giving up, or having such strict coding-style preferences that they burn time adjusting generated code to their taste and blame the AI, even though the changes are non-functional and they didn't bother to encode them into rules.
The productivity gains are blatantly obvious at this point. Even in large distributed code bases. From jr to senior engineer.
I can see someone who is very particular about their way being the right way having issues with it. I’m very much the kind of person who believes that if I can’t write a failing test, I don’t have a very serious case. A lot of devs aren’t like that.
Sometimes you’re unable to write a failing test because the code is such that you can’t reliably reason about it, and hence have a hard time finding the cases where it will do the wrong thing. Being able to reason about code in that way is as important as for the code to be testable.