You're arguing for charitable interpretations of statements by people who claimed Go didn't need generics or even that Go was better without them, saying that one should be able to change ones opinion without being called dumb. I fully agree.
Similarly, a charitable interpretation of what kubb wrote would be that they are referring to those who might have been dishonest in their defense of Go's lack of generics, which one might say kubb does indicate by using words like apologists and zealots. The Internet is full of people who pick a team and will say dishonest things in perceived defense of it. I agree with kubb in this regard. That is the charitable interpretation of what kubb wrote, but instead you assumed kubb referred to everyone who ever voiced that opinion and suggested kubb should refocus on what kind of person they want to be.
The Internet needs more charitable interpretation, and HN in particular. Perhaps I failed to interpret you charitably now? Nuances get lost easily in online debates... :)
I sort of agree. The most charitable interpretation is that kubb is “nutpicking”—addressing the least articulate and worst arguments of a community. But he would do us all a favor to acknowledge explicitly the boundaries of his criticism. For example, I hold the position that generics aren’t necessary, but that they will make some code more clear and a lot of other code less clear (and this has long been my position)—does kubb’s criticism apply to positions like mine? Am I his “zealot”?
Moreover, using terms like “zealot” to refer to people with whom one disagrees is very likely to inflame the thread (as indeed it already has, to a degree), whatever kubb’s intention.
I am not a Go expert so please correct me if I'm wrong, but... The fact that the stdlib HTTP utils in Go recover from panics in handlers, and provide no easy setting to change this, is one of the things that really annoys me. Seemingly, they consider it a backwards compatibility break, since it's a behavior change that would affect all existing middleware you try to use (see e.g. https://github.com/golang/go/issues/16542). You need to provide your own error handling middleware (of which several exist).
A beginner needs to be very careful and read the docs to catch e.g. this line:
"If ServeHTTP panics, the server (the caller of ServeHTTP) assumes that the effect of the panic was isolated to the active request. It recovers the panic, logs a stack trace to the server error log, and either closes the network connection or sends an HTTP/2 RST_STREAM, depending on the HTTP protocol. To abort a handler so the client sees an interrupted response but the server doesn't log an error, panic with the value ErrAbortHandler. "
I think that is an unintuitive assumption that makes for brittle software. Also, the fact that Go kind of blesses the use of panics internally in a library (e.g. "The convention in the Go libraries is that even when a package uses panic internally, its external API still presents explicit error return values.", https://blog.golang.org/defer-panic-and-recover) makes for even more brittle software, as predicting what state a control flow interruption like panic through a call stack leaves your data in can be challenging and require great care to avoid invariants being violated. Last I checked, e.g. Rust did not allow the reuse of data owned by a panicking call stack without explicitly asserting that it is still considered valid.
I guess I'm showing my bias for languages that make it harder to make mistakes, but I don't like brittle stuff like encouraging panic-like control flow for non-exceptional situations, implicit reuse of "panicked data", missing non-exhaustive match on enums, missing enums altogether, missing sum types and pattern matching making everything from detailed error management to proper state representation more brittle, etc etc. I'll stop there and not get into the rest of the stuff.
Go is one of the languages my employer pays me to write and I will say that I have a significantly higher appreciation for it now than when I started (it IS very beginner friendly and very ergonomic in general), but I wish it would help me more to write really robust and correct software.
> The fact that the stdlib HTTP utils in Go recover from panics in handlers and provide no way of changing this
It's comically easy to just wrap the root handler, catch panics yourself, and explode.
> You need to provide your own error handling middleware (of which several exist).
And you need to provide your own handlers anyways, which seems reasonable to me, unless you're against writing code?
> A beginner needs to be very careful and read the docs to catch e.g. this line:
Yes you need to read the docs to understand what a function does. This does not require being very careful where I'm from.
> Also, the fact that Go kind of blesses the use of panics internally in a library ... makes for even more brittle software, as predicting what state a control flow interruption like panic through a call stack leaves your data in can be challenging and require great care to avoid invariants being violated.
They make this pretty clear, that you shouldn't leak the panic, in which case it's just an implementation detail. If your library is brittle, that's on the implementer. You don't have to use panics for control flow. It's just something you can do. A tool. Sure, maybe it's sharp.
> I don't like brittle stuff like encouraging panic-like control flow for non-exceptional situations
I've never seen anything remotely encouraging use of panic for control flow. The blog post even states it's uncommon and unusual. I don't know how you interpreted that blog post, which reads to me as "here's how defer works", to being "you should use panic for control flow whenever possible."
I'm not a native English speaker, so I had to look up a definition of disingenuous to make sure I didn't misunderstand you. I promise you I'm trying to be candid with my own opinion, and not trying to deceive in any way. What would my nefarious purpose be? I'm just stating my opinion. Whether something is brittle or not is an opinion within a range to me, not an absolute. Perhaps my English is just poor. I'm sorry if you genuinely feel I was being disingenuous.
> It's comically easy to just wrap the root handler, catch panics yourself, and explode.
> And you need to provide your own handlers anyways, which seems reasonable to me, unless you're against writing code?
> Yes you need to read the docs to understand what a function does. This does not require being very careful where I'm from.
Yes, it's "comically" easy to do so if you know that you need to do it. I do it. I didn't say it was hard to do. But Go prides itself on being beginner friendly, having consistent behaviors and not having many ways of doing the same thing. The main function doesn't automatically recover from panics. Spawned goroutines don't automatically recover from panics. I feel like it wouldn't be outrageous to consider this an expected behavior for someone writing Go code, as very few libraries to my knowledge recover panics they didn't start themselves. The behavior of the stdlib HTTP utils diverges from that expected behavior. I think there are tons of developers in every language who don't necessarily scour the docs but assume consistency with some behaviors that seem basic and logical. Again, opinion, if that wasn't clear.
> They make this pretty clear, that you shouldn't leak the panic, in which case it's just an implementation detail. If your library is brittle, that's on the implementer. You don't have to use panics for control flow. It's just something you can do. A tool. Sure, maybe it's sharp.
Yes, they make it clear how it should be used if it is used, and I didn't claim otherwise. I also didn't claim that leaking it was the problem, but instead that maintaining invariants in the data of the library using the mechanism can be a hard thing to do, and whenever something is hard to do right it can contribute to brittle software.
> I've never seen anything remotely encouraging use of panic for control flow. The blog post even states it's uncommon and unusual. I don't know how you interpreted that blog post, which reads to me as "here's how defer works", to being "you should use panic for control flow whenever possible."
Yes, I will meet you half way here. :) This particular blog post doesn't encourage it, and at the moment I can't recall any of the other places I've seen it. The blog post gives an example of the Go stdlib using it, and then describes how to use it in a library if it is used. My personal preference would be for that blog post to offer more caution or perhaps even discouragement, but again that is personal opinion.
Summing up, I think Go is a language that both strives to be and is beginner friendly. Diverging from consistency in basic behaviors in the stdlib is not great, sharp tools not labeled as such is not great, etc, in my opinion. I know we don't agree and that's perfectly fine. I appreciate your reply. But I'm not trying to be disingenuous.
I would offer the same criticism of them, as it's the behavior in general that I don't prefer. But of course it's perfectly fine to disagree; this is all about preferences. I like failing fast, being required to handle errors, and tools (including programming languages) that very clearly help me identify where errors can occur.
This was actually more challenging to respond to than I thought it would be.
In Go (as the blog post that I linked alludes to): "The convention in the Go libraries is that even when a package uses panic internally, its external API still presents explicit error return values."
However, it is not the expectation that libraries should recover from panics they didn't start themselves. If they did, it would be very hard to panic and actually have a natural expectation that your application would exit due to that panic, which is how most Go code behaves and expects to behave.
As I wrote in another reply, neither the main goroutine/function nor spawned goroutines automatically recover from panics. They DO shut down the entire server if any code in them panics (provided that no boilerplate recovery is performed at the root of the call stack, which in itself would make it very, very hard to reason about the consistency of the data that might have been touched before panicking at any one of countless operations in the code in the call stack).
Therefore, one might also argue: should the entire server shut down if any worker thread causes a panic? I do agree that it is more plausible for an HTTP request handler to do so, but not enough to change such a basic behavior. Go doesn't let you register a global panic handler that could recover from, but also log, panics in a consistent way, applied across your entire process and customizable to the preferences of the developer as to their chosen trade off between "fail fast/never continue processing in the face of unexpected programming errors" and "an unexpected programming error occurred but I still want to continue executing and hope that nothing broke in my application".
And I do acknowledge that different developers/organizations would want to make that trade off differently, but at present it is not very convenient to do consistently. The Go creators chose not to allow global panic handlers (there are a bunch of discussions about it on Google Groups and similar, and I do agree with some of the arguments in them).
Some people (myself included) might prefer that the application fails and whatever orchestration manages it triggers an alarm with operations staff, developers, etc, rather than risking that the application keeps running and, due to some violated invariant or inconsistent data, keeps making mistakes, perhaps serious ones.
This of course depends a lot on what kind of application you're building and how important this is, how much uptime for partial (but potentially buggy) functionality weighs against never risking serious mistakes. I tend to think that the correctness of most software in the world is actually important these days, but I fully admit there's a scale. Go however is being used to build all kinds of software these days.
If one doesn't like that strategy, and wants to build software that recovers in other fashions, then perhaps one should have a look at Erlang and its process supervisor trees, or other systems with other trade offs.
It is a genuinely hard question, I admit that. I just don't think Go's stdlib in this case chooses a position on that trade off that I like, that's all. It's all opinion, and we're all entitled to them. Thanks for asking, and forcing me to put thoughts into words!
This does seem very interesting, at first glance. One thought:
> These "scoped life-time" reference counts are used by the C++ shared_ptr<T> (calling the destructor at the end of the scope), Rust's Rc<T> (using the Drop trait), and Nim (using a finally block to call destroy)
So Rust's non-lexical lifetimes don't remedy this? Meaning the actual drop of xs needs to occur at the end of the example at the beginning of section 2.2, as opposed to right after the map. I would have thought that ys borrows nothing from xs, so the drop could be inserted right after map? Perhaps it's too early in the morning for thinking for me.
Edit: As varbhat has edited their post to clarify that they indeed did not mean to say backing in an adversarial way, my post can be ignored or just read as my understanding of the space.
Apologies if I misunderstand your post, in which case you can disregard my entire post, but -- I assume you mean as in adversarial backing?
These companies use practically all of these languages but to a different extent, for different uses, and for good reasons.
I currently get paid to write C and Go but I dabble in most things, and I'm certainly the happiest when I get to write Rust and ReasonML/ReScript. Having said that, I personally think it's fair to guess that Swift will continue to shine mainly as an Apple ecosystem language, Go will continue to shine mainly as a network server language, and Rust (and whatever follows it) will continue to grow for network servers but will also slowly seep into our shared foundational layer of software. Slowly, slowly: libraries, kernel modules, runtimes, stdlibs for other languages, frameworks, etc. This is a good thing, because it slowly rids us of old C and C++ code bases that may not have kept up with the times.
Yes, all of these languages are being used to write excellent CLI tools, graphical applications, etc and that's wonderful (I use many of them), but none of these languages are realistically competing to kill any of the other in any definitive way.
Now if only Go would give me sum types (ADTs), my life would be even easier :)
I recently "closed" 12000+ (not a typo) tabs in one of my Firefox profiles into OneTab. In another profile I have 30+ different windows open for different projects and contexts, and I have more profiles (typically one per WM workspace which are topical).
I've tried lots of browser addons/extensions and tools over the years, but none offered proper salvation. I have a design in my head of what kind of tool I'd like, but like everyone else I have no time between work and everything else to build it.
Long ago I used to blame the tools, but these days I realize it's me. I put minimum 32 GB RAM into my boxes, and it's because of VMs and browsers.
Yours is an excellent comment, factual and even handed. As someone who had to delve into these things and had to implement e.g. a variant of MultiPaxos for work (this was before Raft existed), I agree that there is a bit of unhealthy confusion going around.
We generalize every day because everyone can't be expected to learn everything, so we abstract away, define constraints and best practices, and we select our library-fied tools and apply them. We do this with operating systems, file systems, encryption, and also with distributed systems. The basics of distributed systems and their invariants and trade offs are not that hard if you give yourself time to study them properly, but when you want high performance and scale it does get challenging.
It is important that any abstractions we make, any generalizations, constraint definitions and best practices that we expect people to read, accept and adhere to are as correct as possible without breaking that abstraction. So thank you for your comment. Please note that mine is not a pro-Paxos comment either, just one appreciating good information being spread so that people can make good choices and trade offs.
Distributed systems are hard. To paraphrase: In the data flow, no one can hear your canary msg scream.
> React developers do not understand functional programming
As some sibling comments note, this is not a fair conclusion to draw. And not that it disproves your statement, but React's original creator Jordan Walke wrote the first React prototype in SML. Not understanding functional programming is not on the list of things I would ascribe to him. He's a smart guy.
On a slightly different note, I'd recommend anyone try out Reason. It's slowly maturing and can be a real joy, at least compared to JS/TS.
Yeah, I'm not really plugged into the community that well but I think reasonml.org is the long term plan and that some sort of transition might be going on? I've mostly used reasonml.github.io and Bucklescript docs, but lately I've started using the Discord channel and it's very welcoming, friendly and helpful!
Yes, there are a lot of us out there who use Signal and have this single missing feature as our largest pain point. My previous phone (iOS) would have been donated or sold to someone who could make better use of it, had I been able to actually get years and years of Signal conversations (with media) and memories out of it. But I can't (without prohibitive amounts of manual work), so it lies unused in a drawer waiting for the day I might.
The Signal devs don't discuss their roadmap, as is their prerogative. The result is of course that no one knows if such features are even planned, let alone worked on. Half a decade (?) of sad and frustrated forum posts and GitHub issues attest to that. I scan through them from time to time to see if there's any word.
But! There was actually a tweet from Moxie just a few weeks ago in a thread started by Matthew Green, I think, hinting that they might be working on it. It did make me a little happier. But yes, five years is a long time to wait for this feature, and we don't know for sure if or when it's coming. Me, amidst all the frustration I am very happy for the software they are giving me almost for free (I've donated a little bit).
By the way, Josh, props to you for your patience and professionalism in the debian-devel thread about librsvg the other day.
Oh, wow. I stumbled over the thread just the other day and mechanically just read the month and day... Since it was November I somehow assumed it was recent without reacting. Thanks for the correction! Well, belated props to you then. =o)
Do you mean to say that you think that the OpenSSL project feeling obliged to implement the heartbeat extension created by a standards body is significantly more to blame for causing Heartbleed than the (understandable) causes for the general quality of the OpenSSL project code base (like lack of funding, etc)?
OpenSSL is one of a number of projects (this is maybe more prevalent in Free Software but it's hard to tell) which takes the approach of a hoarder rather than curator, it has got better, but that's definitely how it got to where it was when Heartbleed happened.
In these projects rather than try to solve some particular problem or group of problems and use standards on the path to that solution, the project just throws together whatever happened to attract somebody's interest in a standard into a big heap of cool toys without rhyme or reason.
I think we actually could have blindly got lucky with Heartbleed, it could easily have been the case that to make this extension work you needed to add 40 lines of custom code to every program even though it would always be identical boilerplate code. After all it took them years to add a sane API for "Just check the bloody hostname in the certificate matches". But, that isn't how it worked out.
If you compare Python's "batteries included" philosophy, OpenSSL and a few other libraries take something closer to: "I just keep everything in this old cardboard box, try looking in there?". And sure enough there are batteries, although they seem to be covered in a sweet-smelling sticky substance, there is also a broken Gamecube, one cufflink with a brand logo you don't recognise, a chocolate bar dated 1986, a PS/2 to USB adaptor, a C60 cassette, two dried-out PostIt notes, one sock, a 40cm USB cable with a mini-B connector, and the spare fuses from a 2005 Ford Focus...