Hacker News
Go Creeping In (tbray.org)
298 points by mpweiher 4 months ago | 308 comments



I am not a Go programmer, but every time there is a discussion about Go I want to link to the article on “Language Design in the Service of Software Engineering”: https://talks.golang.org/2012/splash.article

It explains so many of Go's design decisions and why it is the way it is (e.g. why something like gofmt/gofix is among the best things about the language); I wish there were similar articles on the design decisions behind more languages.

The first point in this (OP) post is also explained by the same philosophy: as far as Readability / “I can pop open almost any .go file and [can] understand it pretty quick” goes, it is a powerful thing to be able to navigate to pretty much any team's code and not only understand their code but also be able to focus on what's specific to that code rather than on idiosyncratic stylistic differences. In other languages this is done with a lot of adherence to the style guide; if Go tries to make it even easier, this would make sense.


At my current job, most of the backend code is divided into two separate groups. One uses Java while the other is built with Go. Both teams have very good reasons for choosing what they did, and both languages are serving their purpose pretty well, but one of the major issues I've seen with the Java team is how much discussion there is around styling the code. Most PRs have some comments about styling, and it gets a bit worse every time a new engineer joins the team. I think what you pointed out about Go is one of its best features, if not the best.


The Java team could use EditorConfig, CheckStyle, FindBugs, ErrorProne, or other tools to enforce team rules similar to how the Go team uses tools like gofmt.


I think the point here is that there aren’t tools “like” gofmt - there is only gofmt. Other tools wrap it, but there’s no choice (and bikeshedding) to make.


If the team puts in place a code formatter/linter tool there won't be bikeshedding for them either. If they're happy to let google format their Go code they can let google format their Java code too: https://github.com/google/google-java-format


We had a "discussion" on which formatter/linter to use for a different language. There were 2 choices and I bet we spent 300 man-hours on nonsense. Everyone had to have their say.

You can decide TO use a linter, but then you just get to argue about which linter...


The fact that there are multiple linter tools shouldn't paralyze a group of people for 300 man hours. The Go ecosystem will eventually have choices too. For instance, I think there are a couple of different protobuf implementations for Go already.


It shouldn't but it does. See Wadler's Law:

https://wiki.haskell.org/Wadler's_Law


Seriously? If something like this happened where I work, probably everyone would be let go. Sometimes I kind of envy the freedom of wasting your work time on nothing... But honestly I still prefer to do something that helps the users rather than measure each other's “manhood” in this nonsense...


I think that a proper IDE has had these features for more than 10 years. Honestly, to me it seems a huge disadvantage to have to run a command-line tool outside your work environment to get the same.


Also gofmt is not configurable.


There is a much better formatting tool. It’s called F#.


There's a big difference between standard styling and optional styling.

Yes a single team could decide on a custom style, however when you need to cooperate with other teams, it takes a lot of wasted time in politics to agree upon a style.

It's also harder to read random code written by others, e.g. open source libraries. You can't just glance at it in the browser; you have to download the code and run it through your custom style applier.


> There's a big difference between standard styling and optional styling.

THIS.

This is not stated enough in the value of gofmt - it's the standard formatting style for everything. I've worked in Java and C++ for a couple of decades, and while every team has had its own style guidelines, and occasionally tools to enforce them, even different teams in the same organization would have difficulty understanding each other's code.

The value of gofmt is that it's the only formatting standard, thus I never have to learn a new formatting style when reading the code for an open-source library, or coming in to a new job. It's exactly the same as what I've been reading for the last 3 1/2 years of working in Go.


I proposed EditorConfig to them but I think there were still discussions and debates. Not sure if they've settled on anything yet.


I'm not a huge fan of go for the type of work I usually do, (Line of business/enterprise apps with really strict timelines) but this is definitely one of the best things Go did right, and I hope other languages learn from them.

I think the styling issue is even more important than on first glance because the way a lot of people do code reviews is they look at code until they find a few issues to comment on and then mark it approve. If they find 3 styling issues they might just make those comments and not search for the deeper issues that actually affect the client like bugs, ways the code could be made more defensive, or performance issues.


Even in a team of a single person, go's opinionated approach to styling is great. I can still maintain code I wrote when I was a go noob and not cringe when I read it. It's gofmt that made me fall in love with go.


The Java team can just switch to Kotlin and immediately improve their readability and expressive power. I don’t think that the Go team would switch to another language to do the same...


To add to this, I appreciated that C++ had this, too: http://www.stroustrup.com/dne.html

Was very insightful, in the same way your link was.


If everything looks the same, experts' code looks just as bad as novices' code. If you omit every feature that novices can't figure out how to use, what you're left with is a weaker tool that doesn't give as much leverage as it could have. I want more powerful languages for harder problems, where getting better pays off in clarity and reliability.


Readable... You mean compared to an IAbstractFactoryFactory? ;-)

I'm only half joking as I've seen time and again indirection almost for the sake of indirection. I've seen C# projects where an IRepository is injected into a Controller where IRepository only passes data to another injected IDatabaseHelper... Ripped all the IWhatever out, and injected the DatabaseHelper directly into the BaseController, and voila 80% of the codebase gone, and still just as testable... much simpler to follow too.

In the end, I think that a lot of developers read patterns books, or see them used... then start to apply them everywhere, even when they make no sense. I've had arguments where the counter is, "this is how devs expect it to be." As if that's a valid argument for making things more complicated for no benefit.

It isn't that you cannot write simpler software in the main languages used, it's that developers/architects don't. There's something to be said for only abstracting when it makes using the abstraction simpler.


The most important property of Go for me is that the language is not red/blue. [1]

This enables I/O interfaces to be truly universal, covering just about everything from files on disk, to pipes, to in-memory buffers, to sockets. This feature facilitates a style of, for lack of a better word, generic programming that is hard to come by in other systems.

I find basically any other language or platform except Go lacking and unpleasant in this regard, due to the viral nature of asynchronous functions, which never disappears entirely, no matter how much syntactic sugar is sprinkled on top of it.

Things like mismatch between event reactors or asynchronous frameworks do not exist in Go. Interfaces Just Work, and the entire ecosystem uses them.

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


> The most important property of Go for me is that the language is not red/blue.

This initially attracted me to Go as well. Unfortunately in production apps your functions get colored by their `context.Context` argument to support cancellation. Unfortunately `Context` is viral because it needs to be passed down from `main` down to virtually all blocking functions.


I've never been a fan of the "what color is your function" essay, because it implies that Go is in some sort of unique space. In fact, Go just uses threads. There's no semantic difference between Go and pthreads. The only difference is that Go has a particularly idiosyncratic userland implementation of them.


While the "what colour is your function" essay highlights the author's pet peeve, asynchronous functions, I always understood it to be less about the underlying implementation and more about syntax and semantics; the whole point is that control flow ends up infecting function-level semantics. That problem extends to anything else a language can treat as "coloured".

For example, in Haskell, side-effectful operations end up being "infected" with the IO monad. This means you're not free to mix and match functions — the moment you need to call some IO function, all callers up the stack need to be monadified, too. This might be a late change — suddenly you need a logger or a random number generator, and it has to be passed all the way up from the outermost point that uses monads. In practice, monads are so deeply ingrained in Haskell now that most devs probably don't see this as a colour problem.

Multi-value-returning functions in Go are another example of colour. The only way to use the return values of a Go function that returns a tuple is to assign them:

  value, err := saveFile()
  if err != nil { ... }
This means functions like these aren't composable. I can't do saveFile().then(success).fail(exit) or whatever, like you can in Rust. The moment you have a function returning more than one value, your only option is to create a variable. It's weird.

Interestingly, you can do this, but I've never done it and never seen it in the wild:

  func foo(v int, err error) { ... }
  func bar() (int, error) { ... }
  func main() {
    foo(bar())
  }


I figure the main feature of goroutines is that it is considered acceptable, for whatever reason, for a library function to spawn helper goroutines without telling you (as long as it reins them in somehow and they don't leak or do anything weird like that). If, e.g., a random C or Rust library spawned threads for random tasks without explicitly being a concurrency thing, it would probably raise a lot more eyebrows, no?

In the end this is probably because of the possibly superstitious belief that goroutines are free whereas threads need to be carefully budgeted for, and it lands somewhere between premature optimization and designing for very niche scalability requirements, but subjectively the result is still that goroutines are "available" in a lot more situations than boring, pedestrian OS-level threads.

I feel like that counts as a semantic difference, even if it might be social more than technical.


An implementation with buy-in across the entire ecosystem and language so that you don’t have some systems using threads and other systems using futures and other systems using different reactors, etc.

Also known as the point of the article.

Additionally, that implication is entirely of your own creation. The article explicitly lists many languages besides Go:

> Three more languages that don’t have this problem: Go, Lua, and Ruby.

Perhaps you just have an anti-Go bias?


Last I tried it, if I had a million goroutines calling stat(), Go would attempt to spawn a million kernel threads. So I rolled my own rate limiter, bleh.

Is that still the case? (Happy if not)

If it is still the case, is there a standard solution or is it up to the app author?


It's up to the application author in that case, unfortunately. stat() enters the kernel and resolves in one shot, so it requires a whole thread. I haven't read into it very carefully, but on Linux, perhaps the new io_uring business is going to change this state of affairs. For now, however, you need a semaphore of your own.


I wish I understood what problems people solve day in and day out such that they need to call IO in the middle of Dijkstra’s algorithm. Most "business logic" ought to be pure functions over persistent data structures, with no IO other than the occasional logging. At the last possible moment, 5% of the code wires in IO in some boilerplaty way. Is that sufficiently pervasive to worry about it?


> what problems people solve that they need to call IO in the middle of Dijkstra’s algorithm.

I worked on several problems of this nature at Twitter in 2012. Hopefully there’s a better way to solve them in 2019...prolly not, but maybe.

say you want to find the median of the number of followers a person on twitter has. so that should be easy - make 1 dataframe with follower count of each bloke and call median() - well, there’s some 300,000,000 blokes, so not that easy :) You have to make a dataframe via ETL - reading & writing to disk 100s of times, loading a few thousand users each time, distributed median computation. so a silly sub-second median query took 2 months to code up & debug & ran for a few hours due to so much IO.

another much harder problem - you want to find the median number of hops between one user & another. so now you have 300m x 300m tuples as your result - where & how to store them is in itself a monstrous challenge. but how the heck do you even compute the result ? you read in one tweet from john to steve, so that’s 1 hop from john to steve & viceversa. you then read a second tweet from steve to mary, so that’s 1 hop from steve to mary & viceversa, 2 hops from john to mary & viceversa. in this manner you read 100s of billions of tweets & keep updating hopcount. somewhere in there john sends mary a tweet - oh fuck now the hopcount is 1, not 2. this will then change lots of other hopcounts. in theory there are nice graph algos for this sort of thing. but in reality, your data is billions of tweets constantly increasing, stored in distributed compute clusters across the planet & just getting a handle on all this can be a 6 month project for some lucky scientist who got to work on this.


> I worked on several problems of this nature at Twitter in 2012. Hopefully there’s a better way to solve them in 2019...prolly not, but maybe.

Okay, so Twitter has scale. To a first, second and third order engineering approximation--nobody else does.

If you are a mere mortal writing practically anything, pull it all into memory, operate on it to create another copy, destroy the original copy (or let GC kill it).

Embedded programmers might get a pass on this given limited memory (32K RAM)--but that same kind of attitude is getting more and more essential as you start getting Big/Little core mixes on the same chip.

Computers are mind-bogglingly powerful.

I have been completely stunned at how many transactions Nginx+Django+PostgreSQL can actually handle before you need to start thinking about "scaling".


> I wish I understood what problems people solve day in and day out such that they need to call IO in the middle of Dijkstra’s algorithm.

For Dijkstra, imagine a very large graph that cannot fit into memory, where you'll need to go out to disk or network to compute the distance of two nodes (or fetch part of the graph, etc).


The business logic many (perhaps most) programmers work on is primarily occupied with gluing together multiple state stores (databases, caches, message queues), running some very simple computations, and writing the output to some IO sink (often a web response).

Sure, you could extract the computations part out. But that barely moves the needle on testability/cleanliness, because most of the business (or business-value-driving) logic is the data flow management--highly stateful IO coordination.


Yes. So much business logic requires IO. Pure business logic is a great dream, but it's a dream.


I/O, both disk and network, can happen almost anywhere because data doesn't always fit in memory, may be streamed from a remote source, and I/O handling cannot always be deferred indefinitely. A significant part of systems software design is accounting for this reality.


One of the primary issues I run into with this is that some parts of the domain logic determine what data you need from the database, and this logic can't be easily moved into SQL (or however you're specifying what data you need back from the datasource).


That link has a great punchline. "All of this is easy with threads which never cause any problems!" I haven't tried Go, but this seems certain to be an exaggeration.


> I find basically any other language or platform except Go lacking and unpleasant in this regard, due to the viral nature of asynchronous functions

That's...odd, because while Go lacks that issue, it was by no means unique among languages or frameworks in that regard when it was introduced, and isn't now, either.


This is the same reason why I love C, and why I think that, for projects larger than a thousand lines, it allows me to be MORE productive than more sophisticated languages. Possibly even resulting in a smaller line count, but at the very least less accidental complexity.

One small red/blue pain point in C is varargs - for most varargs functions I have to make a "..." and a "va_list" version.


When using a C library, whose code is responsible for allocating objects? Whose is responsible for freeing said objects? That alone leads to a LOT of issues.

Not to be evangelical, but give Rust a look.


Seriously, all these language mechanisms like RAII make it a lot harder to write modular code. Look at the mess that C++ got itself in, with its constructors, default constructors, move constructors, rvalue references, exceptions (required as a consequence of constructors), and what not. It's hard to believe Rust could make it significantly less painful.

Programming is mostly not about initialization and deinitialization. If it is, you're doing it wrong, you have too many small objects.

Yes, stack allocated STL containers can be nice for quick "scripting". But I will happily write a few function local deinitialisations to enjoy much less convoluted and interdependent, slowly compiling code.


Rust makes it less painful by not having constructors, default constructors, move constructors, rvalue references, or exceptions.


Programmers who write CLI apps/network-services tend to love Go. Programmers who love mutable, imperative programming love Go. Programmers who find the CSP model a natural fit for asynchronous programming love Go. Programmers who want the stdlib to contain what they need for 80% of use-cases love Go. Programmers who love checking `if err != nil` in every statement love Go.

Programmers who write re-usable libraries (eventually) come to hate Go. Programmers who leverage Sequences/Streams, immutability and functional programming tend to hate Go. Programmers who prefer exceptions or railway-result types for error handling also tend to hate Go. Programmers who desire tight control over performance and memory management will obviously hate Go.

In my personal opinion, the Go language handicaps itself in order to claim simplicity. However, the Go standard library is amazing for productivity. If Go the language didn't come with its stdlib and wasn't pushed by Google, you wouldn't have so many Go fans.


> Programmers who leverage Sequences/Streams, immutability and functional programming tend to hate Go.

I dunno, applying the Reader/Writer pattern to other objects in order to get sequences & streams works pretty well in Go.

> If Go the language, didn't come with its stdlib and wasn't pushed by Google, you wouldn't have so many Go fans.

I think the latter part is very true. Go had the very good fortune to be a Pike tool put out by Google back when, ‘hey, Google are producing this!’ was a good thing.

I like Go, but I also like Plan 9 — and it never took off. Neither did Newsqueak or Alef. But Go did, in large part, I think, due to Google’s positive reputation a decade ago.


The Reader/Writer interfaces are beautiful in their simplicity, but they're only useful if you want to stream raw, non-Go data. If you want to iterate over a slice or a channel, basically as soon as your data fits into Go data structures, you don't have a simple and beautiful pattern for working with it as a flow of data.


You can always implement a FooReader, which iterates over a flow of Foo objects (which might, for example, be items in a slice or channel). We do this all the time. It is, as you note, quite simple and I think elegant too.


Yes, I have done this, but I was talking more about streams that I can splice, filter, map, join, pair, slice, window, buffer, groupby, scan, skip, take, switch, zip, reduce, etc.

Not really possible without generics, lambdas and a good core of functional routines.

Once you get used to this and the speed, productivity and elegance with which problems can be solved, using Golang feels a bit primitive and painful. The same Go code is usually 5x-10x longer.


>Programmers who desire tight control over performance [...] will obviously hate Go.

Curious, why do you say that? Go tries very hard to prevent abstracted, hidden costs like (con|de)structors, so performance is quite obvious. Could you explain a bit more?

I understand the memory management part.


The golang gc is not tunable, at least compared to the JVM GC for example. It is tuned for latency, not throughput. So, if you're writing high-throughput code, you can't really do much. You can try to write code that reduces allocations, but so can you in Java or C# (even more so after Java gets value types).

Not to mention that the golang gc cannot handle GBs (let alone TBs of heap space) the way the JVM routinely does.

The golang runtime cannot devirtualize interface implementations the way the JVM does[1] or .NET does.

I'm sure there are more examples to think of.

[1] https://youtu.be/lunJmMBkqLo?t=1072


Go GC can easily handle GBs of heap, especially if you're careful with pointers.


I benchmarked etcd with 10GB-50GB of heap space and it didn't do a good job at all, lots of CPU time spent in GC.


The performance of the etcd system is complex[1], there’s raft consensus, replication, security, snapshots, networking, and basically a lot going on.

Perhaps your benchmarks really do reflect problems that go has with gc, but perhaps some other part amongst the numerous moving parts is causing the poor benchmark results. I’m not sure that we can conclude from your results that go has gc problems when the heap is large.

[1] https://github.com/etcd-io/etcd/blob/master/Documentation/op...


From what I recall, having a large-ish number of watchers caused memory usage to go up significantly. The gc started thrashing as CPU usage went up. Put performance dropped with gc cycles.


> Programmers who write re-usable libraries (eventually) come to hate Go.

> the Go standard library is amazing for productivity.

In conclusion, you're not good at writing Go libraries.


That's a trend I witnessed first hand while working as a dev for a bank. Old Java code slowly being phased out for Go. Which to me seemed like madness at the time due to the lack of generics, but it turned out to be fine, it just needed a different mindset. This often resulted in a much shorter edit-compile-run loop and halved the hardware requirements, especially memory.

These days I'm not even surprised when new clients demand Go in my current consulting firm.

I think there are "better" languages than Go, but they also come with their own tradeoffs, and from what I've seen, to many it seems Go is preferred after weighing everything.


When I was being exposed to functional programming I absolutely loved it. However, when challenged with rewriting non-functional code in a functional manner, all hell broke loose.

It's not just a complete rewrite, it's a completely different mindset. Almost to the point that it's simpler to just start fresh with the problem at hand.

I get the same feeling with Go, if you try to copy what you've done in other languages you will be enraged. If you embrace the new mind set, it's nice.

Rust is nice, but it's a breath of fresh air to work with proper words rather than weird syntax. But maybe I've just been away from C for too long.


This is interesting. At the Banks was this driven by internal staff, or contractors?

I’m considering contracting, any particular banks that you thought were more competent?


In my case the Go adoption was spearheaded by a team lead from their staff.

As for what banks are more competent or dare I say modern, it can vary wildly even in the same bank depending on what piece of software each team works on. From my experience banks are (understandably) resistant to change so you might have a hard time finding greenfield projects there. In general, I'd say go for banks only if you're interested in quick money over career development. It can be soul crushing at times.


Disclaimer: At a very large US bank.

Seconding the assessment. We have pockets of everything from legacy mainframe you've probably never heard of (no, not every mainframe is an IBM) to cutting edge cloud/docker/devops/agile/etc. Just depends on where you can be plugged in.


It’s amazing — amazing I say — how little generics are missed.

Ooof. I've just spent about 3 days debugging Go code that was copy-pasted x N, with minor (incorrect) modifications. Rewriting it with generics would have been about a 30-minute job and would have made it much more maintainable in the future.


This is particularly bad with channels. If you look at a book like "Concurrency in Go", the apparent simplicity of the language leads to patterns that are far from trivial, and the lack of generics means that it's impossible to implement them correctly once and reuse them for any data type... so everyone ends up having to implement their own, which can be fun, but is pretty bad for correctness.


Yeah, I wish more people would accept that there are clear tradeoffs between different type systems.

The tradeoff in Go seems to work reasonably well for services that don't want, or need, to handle a complex application domain as fluently as concurrency. Essentially it works well for network/systems-level code, and anything that looks like that.

But for code that is more of an encoding of an application/service domain, the lack of generics hinders reasoning more than it helps. There you have no use for the fancy concurrency stuff; it has to be abstracted away for the domain experts to be able to be productive. There is, usually, also enough code that global readability becomes less helpful, and unambiguous contracts/types across systems become more so.

The latter case is massively helped by any way to enforce contracts over methods and types, as long as refactoring tools can reason over it. Nevertheless I do believe that locally (in methods/functions) a well behaved type inference and IDE tooling is better than extremely verbose type statements that I then make the IDE hide from me!

To me these are clearly different scenarios, but apparently not to most people, or these discussions would rarely occur, but we would have discussions about the characteristics of languages making them suitable for any specific task.

After all, for every solution there exists a possible language where the solution for a particular problem is part of the language, trivially making it the best language for that problem.

Also trivially, there exists a best language for me, personally, to solve any problem. This is the hypothetical language which encodes and compensates for my individual perceptual and cognitive bias.

Now, in a large enough enterprise, the most important contracts, and hence generics, might be expressed in the outward facing interfaces, and thus making them mostly redundant in a single service. Sharing implementations becomes less important at this scale, as it creates problems on it own. Hiring and onboarding masses of juniors are commonly a problem here.

Hence, except for networking/systems styled solutions, Go might also be well suited for very large organizations with endless hiring needs, and a desire to look more modern.


>Essentially it works well for network/systems level code, and anything that looks like that.

You can also see this in the testing story. Go is very focused on unit testing which makes sense when you are one cog in a giant machine. But if you're writing something standalone the friction to write end to end integration tests is what turned me off.


Copypasted code is just bad code, no?


A lot of the time in Go you do have to copy-paste-tweak; sorting lists, for example, requires quite a bit of boilerplate.

That doesn't mean the code above is good, just that the lack of generics sometimes makes copy-pasting in Go inevitable.


Can't you just do something like qsort()?


Technically yes, however it is not type-safe. Idiomatic Go usually tries to be at least somewhat type-safe, and so the standard library's sort package uses interfaces instead of comparison functions. This is a bit more tedious to write unfortunately.


I have had much success with Go for the past 5 years.

Simplicity is by far the most powerful feature. If I have to hire someone right out of school, they can pick up the language very quickly.

If a team member leaves the company, I can still read and understand the code they wrote.

Over a long term horizon, the features of Go make it easier to maintain the software written with it.


This is the killer feature of Go, and it's what so many who clamor for various features-of-other-languages miss.

Go is designed for simplicity. It is easy to write, but more importantly is easy to understand, even far in the future. For a given problem, n programmers will generally coalesce on a small number of possible Go implementations, whereas other languages will often yield n divergent solutions.

I've had Go solutions that I built years ago that I can get into and maintain in seconds, even when I hadn't touched Go in months.

Go isn't my primary tooling (my life today is mostly Swift), but as a comparison where a language has gone awry I'd point to C# -- as Microsoft has tried to incorporate whatever cool thing every other language has added, C# projects have often turned into write-only projects where the original author barely can grok their code a month later, and everyone shrugs and says "rewrite it" when confronted with another programmer's C# code. Rust is much the same.


Can you give an example of Rust incorporating "whatever cool thing every other language has added"?

I have never had the experience of Rust code becoming write-only after a month, and I don't know anyone else who has either.


That statement was clearly in reference to C#, which evolved from a beautifully concise language to now being a mishmash of every other style of programming (including the abomination that is LINQ).

Rust, though, suffers from being too clever, making it a language in which you can make concise, powerful solutions, but it can make grokking the code difficult, inefficient and error-prone.

This is where our egos step forward and we announce that it's No Problem For Me, but we've all experienced a situation where we've leveraged a powerful language to do some amazing things, and then a month later it is a gruelling, headache inducing experience trying to figure out what is doing what, where, so we announce that we'll just rewrite the whole thing.


> Rust, though, suffers from being too clever, making it a language in which you can make concise, powerful solutions, but it can make grokking the code difficult, inefficient and error-prone.

Outside of the borrow checker and invocations related thereto, Rust is fairly simple and transparent. The borrow checker forces you to think through things which are common sources of error before you can code things, which is mostly good, but a necessary cost of that is you also have to learn a whole vocabulary, symbology, and analytic approach around it.

While that does (at least until you've climbed the borrow checker learning curve, a place I haven't gotten past myself) make it harder to grok some code and, even moreso, makes it harder to write some code, I don't find that it promotes false understanding of code as much as very clear lack of understanding of what bits of unfamiliar code are doing.

OTOH, while there are simpler approaches to subsets of the same issues, I don't know that there's anything which really offers what Rust offers at lower cognitive cost.


> Rust, though, suffers from being too clever, making it a language in which you can make concise, powerful solutions, but it can make grokking the code difficult, inefficient and error prone.

Can you explain specifically how Rust is supposedly too clever?


With all due respect, I don't think you're in a position to have an unbiased perspective on this, and I am certain I'm never going to convince you.

But I'll leave it at this: there have been countless "Rust is too difficult" links for you to find on Google, many of which have been posted on here. And invariably when we discuss Rust the conversation goes like this:

"why is it so complex to do [some simple thing]?"

"Oh well Rust is a System Programming Language, and it's doing important things in important ways"

"Okay, but I'm not building an OS, I'm just..."

[chest puffing, egos appear, etc]

"Well I certainly have no problem with it..."


I’ve found Rust users to be very helpful, especially on Hacker News.

For example, in this thread from two weeks ago someone asks why it’s so frustrating to concatenate strings of mixed types in Rust:

https://news.ycombinator.com/item?id=20124821

And Steve Klabnik offers a friendly and ego-free reply with a simple and helpful option to do just that.

I’m sure there are other examples of flame wars, but I wouldn’t associate Rust advocates with chest puffing and egos at all.


I still can't figure out exactly what design decisions you disagree with, but it sounds like you're objecting to the lack of GC or memory safety. There are some applications for which GC is not appropriate. Maybe you don't work on any of those applications. That's fine, but that's very different from saying Rust is overly complicated.


I think this is a great point that indirectly highlights how people are often talking past each other when a go vs rust debate comes up. People work in different problem domains with different constraints, and therefore a good language choice in one domain might be wrong in another.

I don't think anyone in the go community would suggest you write a browser in go. Ignoring the GC performance issues, just the awkwardness of using cgo would make most go developers shudder.

Likewise there are many problem domains where a GC'ed language works just fine. In these cases it is appropriate to consider other factors when deciding which language to use. Perhaps your team is made up of a bunch of experienced scala developers, scala might be a good choice for you over go or rust. That doesn't mean your good choice is universally applicable to all teams.

Some teams might care more about getting the most value per cpu cycle possible. Other teams might care more about the time it takes to get a new engineer up to speed in the language. Both of these can be valid reasons to choose a specific language.

I guess what I'm trying to say is: "Try to choose the language (and other tools) that best matches your organization's constraints. Don't assume that other teams have the same constraints."


"too difficult" is not the same as "too clever" or piling on too many features and encouraging errors.

Do you agree with the sibling comment of "Outside of the borrow checker and invocations related thereto, Rust is fairly simple and transparent."? And the assertion that while it's difficult it doesn't make you misunderstand code?


It is kind of sad to see a simple question answered with a personal attack.

This kind of answers doesn't belong here imo.


There is no personal attack in the comment you replied to.

Trying to inject an invented narrative to quiet comments you don't like absolutely doesn't belong here.


> "why is it so complex to do [some simple thing]?"

To be fair to endorphone, this does happen quite often - mostly, because people often fail to immediately grok the sorts of idioms that will easily pass Rust's borrow checker. As a general rule, it's something that is solved by properly dealing with shared mutability and shared ownership. But failing to deal with these is a pretty bad code smell anyway, so IMHO there's nothing wrong with a language which prompts you to do that! Hopefully, we're not going to equate avoiding blatant code smells with "chest-puffing" and "ego"!


I almost never use the LINQ syntax, as the helper methods with lambda expressions make a LOT more sense to me.

I'm pretty new to Rust and can understand the critique... I think the biggest issue really comes down to a lot of code written, and an ever evolving canonical way of doing things. I think it'll get better with time, and does have some nice features for workflows that are difficult in Go.

Personally, I'm a huge fan of Node/JS as I just get stuff done. Most of my work doesn't need the ultimate in memory/performance as a lot of it is just gluing things together and it does very well as a scripting environment imho. Windows, Mac, Linux and very little friction. I like JS as a language and will leverage functional, some oop and procedural styles in different aspects of an application where they're a better fit for that aspect. Most languages aren't that flexible.


It's interesting to see what languages become the backbone of different industries. Here in Houston, .NET rules, and I have seen a huge shift towards .NET Core and cross platform. I'm in the middle of a big build out right now, and we have begun building .NET Core services that don't use IIS for our service layer.


Here in Germany, at enterprise level it is pretty much .NET, Java, C++ and Web, while most mobile apps tend to focus either on mobile Web or hybrid solutions (Xamarin, Ionic, C++ with native views).

Everything else is just small camps.


I just don't think Go has a place in new code. I know Googlers (who shall remain unnamed to protect the innocent, and politics at that company can kill your career) who joined the Go fandom because it's their company's language, their department was C++ or Google-flavored Java heavy (arguably, a Java style worse than 'normal' Java or Android's dialect), and it was their only way out...

However, when they were exposed to Rust, a lot of them converted to Rust, and now write Rust for Google on internal apps that need performance but safety, especially during concurrency. Others were exposed to Kotlin (especially due to the Android guys), and are championing that as a replacement for Google-flavored Java inside the company, since the JVM is also very sane when it comes to large scale safety during concurrency.

So yeah, if I was writing brand new highly concurrent coroutines-like code, I'd skip Go and Goroutines, and go straight to Kotlin and Akka (not written in Kotlin, but very natural to use with Kotlin), since I end up with the power of the JVM, the elegance of Kotlin's syntax, and the Erlang Actors brought by Akka.

And for those who have never used Erlang, learn it once, write something highly concurrent in it, and you'll almost immediately develop a distaste for almost every other language's faulty concurrency idioms. Without the Erlang actors model, you're not going to be writing fault tolerant concurrent systems; just like "all languages converge until they become LISP", all concurrency systems converge until they become an Erlang actor model.


I'm personally not a big fan of Go due to the weak type system, with lack of generics as the main offender. And as a result, the ubiquitous `interface{}` pattern, stringly typed annotations, ...

BUT Go offers simplicity, incredibly easy concurrency, decent performance, a good standard library for networking/http services, very quick compile times and a stricter type system than many other languages like Python/PHP.

For many teams, especially coming from more dynamic languages, Go fills a sweet spot of performance, concurrency, type safety and ease of use.

I love Rust, but it is much more complex, boilerplate heavy, adds more friction, is harder to learn, and concurrency is much harder to get right, even with the upcoming async/await feature. Not to mention the immature ecosystem, that will still need years of catching up to achieve parity with Go.

Kotlin is also a great language, but has its own downsides. You need to learn and understand Java (due to the many Java dependencies that you will pull in), there often will be a mismatch between Java libraries and idiomatic Kotlin code, concurrency is harder than in Go, and the JVM is a much more complex beast than the minimally tunable Go runtime.

Go definitely has a place, even though it often wouldn't be my own first choice.


> the ubiquitous `interface{}` pattern

I’m totally biased because I’ve worked on and with go for a decade, including reviewing all sorts of Go we write at Google, but I just don’t get this criticism. It’s just not something I see in the code bases I work on. I admit that I may be working on different kinds of programs to what some others do, and I don’t want to deny anyone their legitimate experiences, but my personal experience has been that this is rarely a thing, if ever.


It can be found in many libraries in the ecosystem, and is common enough in the standard library.

The lack of generics just severely restricts the expressiveness of the type system.

I think the best way to work around this is code generation, which is a great tool for internal, controlled environments, but very cumbersome for open source libraries.


I don’t see much, if any, generated Go code at Google. Not to say that it doesn’t happen but it’s not on my radar.



Kubernetes makes heavy use of generated code, as workaround for lack of generics.

They even gave a FOSDEM talk about this, with a very interesting title.

https://fosdem.org/2019/schedule/event/kubernetesclusterfuck...


Aren't HTTP or gRPC APIs autogenerated? You'd be crazy not to use a generator for those.


I don’t have the experience you do, but I feel like it comes up a lot when working with dictionaries and json.


sync.Map?


I've worked on a million line Go code base and a dozen open source projects and many typical backend services, and interface{} is actually quite rare outside of something like json serialization or string printing - i.e. fmt.Printf(msg string, args ...interface{})

I almost never see code that application developers write using it, and usually if someone is writing that code, it's wrong, and I help them figure out how to avoid it.

Usually you can avoid it by using closures or interfaces.


Just depends on the person. Some people like to copy and paste 50 times, some people use interface{}.


Functions are a thing. Interfaces are a thing. I have literally never, in 6 years of full time go development, just copied and pasted code for the purpose of making it work with a different type. Nor has anyone I have worked with in that time (to my knowledge). I have seen two or three libraries written that use interface{}, but these are by far the exception.


Just depends on your application domain. I never copy/pasted code and could count the number of times I wrote `interface{}` on a single hand.


> the ubiquitous `interface{}` pattern

Ubiquitous where? It's the equivalent of `Any` in Scala (and other similar strongly typed languages with unified type hierarchy) and should be strongly discouraged in production code.


Standard library APIs like context.Context's WithValue/Value, sync's Map/Value/Pool, stuff like the sort or container/heap packages. Some of them are obvious "really wanted generics, but wasn't important enough to get the special built-in treatment" cases, some aren't.

Most logging libraries I've seen follow the pattern from fmt.Println etc to accept arbitrary `interface{}` values to be logged instead of actually using some Stringer or Formatter interface in their actual method signatures.


Context's WithValue would still take an interface even if Go supported generics. It stores anything, not values of a specified type. Same with fmt.


Yeah, that's what I meant with "some aren't", I was mostly listing stuff to support the idea that `interface{}` itself is ubiquitous and not some super-niche edge case that never shows up in real code.


async/await is async IO; this is something JS is known for, since it doesn't actually require parallelism.

Getting concurrency _with_ parallelism right is a lot easier in rust than it is in go. You need to write thread safe code in both of them, but you get zero support from your compiler in go.


Rust prevents data races with the `Send` and `Sync` markers, but that's a long way from making concurrency and parallelism easy.

In Go, as long as you are not sharing data, you don't really have to care about whether something is concurrent or parallel, the runtime takes care of it for you. You just use goroutines, IO transparently happens on a dedicated threadpool, CPU heavy tasks are spread out over the available cores.

As long as you restrict sharing to sending owned data over channels, you are mostly in the clear. Of course sometimes performance concerns etc mandate more traditional tools (atomics, mutexes, etc) and there the compiler won't help you anymore, but I'd argue that is the exception in Go rather than the norm.

You can often solve problems with a actor-system inspired design. (learning Erlang/Elixir will probably make you a better Go developer!)

In Rust, you still have to care a lot about what you do. Things like tokio::fs can handle file system IO for you, but you need to ensure that code you are calling does no synchronous IO. Also, CPU heavy workloads should be offloaded to a dedicated CPU thread pool, since the tokio runtime is not intended for juggling CPU intensive tasks in addition to the async IO. This is fine for Rust, a language that wants to give you insight and control, but it is still much more demanding of the developer.


> You just use goroutines, IO transparently happens on a dedicated threadpool, CPU heavy tasks are spread out over the available cores.

In Rust you can do that too! Just use threads. That's what Go does: it just uses threads.

Async I/O is there if you need better performance than what Rust—or Go—can provide. If you don't need that performance, then don't use async I/O.


Go uses async IO under the hood (for network IO at least). The exposed API is blocking, but the underlying implementation uses async IO and transparently schedules Goroutines across different OS threads.

As you know, Rust doesn't currently have stable support for a similarly high-level interface to async IO. See e.g. the following reddit thread from 3 months ago:

https://www.reddit.com/r/rust/comments/ausuld/what_is_the_cu...


Using async I/O in userspace to implement the sync I/O interface of goroutines is equivalent to what the kernel does to implement sync I/O for normal threads.


Sure, technically there is little difference, but my point was it is much easier to use for the developer. The complexity is hidden from you and the choices are largely already made by the runtime.

Sure, modern Linux handles 10k threads just fine, but where Go shines are middle-level network services or microservices where you can easily have 10+ additional tasks for each incoming request. Using real threads quickly becomes unfeasible, and also more cumbersome (passing around threadpools, which threadpool do I use for this task?, do I really want to do those two things in parallel and eat up an extra thread? the cancellation issue, ....).

Also in Rust you'd need to rewrite a lot of code when switching from sync to async, and still need to delegate things to a threadpool since not every dependency will be async enabled.

When the ecosystem stabilizes Rust will be a great choice for the lower stack libraries and servers, where the performance matters. But for many projects the increased complexity, for moderate performance gains, would be a hard sell.


Do many Go programs really have 100,000 goroutines simultaneously? I'm certain that's quite rare.


Working as part of Juju project, I can tell you for certain that the answer here is yes, very much so.


If you have over 100,000 threads, then you are at the point of scalability where you probably want more than what Go can provide. Async I/O would likely be better for your use case, because even 2kB of stack per thread is 195MB just from thread stacks.


Well, even if you do not do any parallel work for each request in your service, each request is itself handled in a goroutine. So depending on the load of that service, you might reach such numbers, and more.


Kind of, but with the difference that Goroutines are lighter than kernel threads, so you can run more of them.


Not as much as you'd think. You can run thousands of kernel threads on Linux.


https://golang.org/doc/faq#goroutines

"It is practical to create hundreds of thousands of goroutines in the same address space. If goroutines were just threads, system resources would run out at a much smaller number."

More generally, Go is hardly the first language to implement userspace threads backed by OS threads. Erlang famously does this, for example, and so does GHC: https://en.wikibooks.org/wiki/Haskell/Concurrency

In response to your other comment, the whole point of Goroutines is that you can use them pretty freely without (usually) having to worry about how many there are. Let's say you're writing a server, and that handling one of the requests requires completing 10 tasks, each of which can run in parallel. In Go you can just spawn a Goroutine for each of those tasks. But if you were using real OS threads, then that implementation could easily end up spawning more than a few thousand threads, when the server was under significant load.


That FAQ answer is misleading. Linux supports thousands of threads just fine. If you have hundreds of thousands, you probably want true async I/O instead of having stackful threads at all.

> But if you were using real OS threads, then that implementation could easily end up spawning more than a few thousand threads, when the server was under significant load.

There's no inherent problem with spawning a few thousand threads in Linux.


The FAQ is talking about “hundreds of thousands of threads”. It doesn’t make sense to respond by saying that Linux is fine with “a few thousand threads”. Similarly, I was talking about a scenario where a server would end up spawning more than a few thousand threads.

>If you have hundreds of thousands, you probably want true async I/O instead of having stackful threads at all.

This is a completely baseless assertion, and really makes no sense at all. "Async IO" in itself doesn’t even offer a means of exploiting multiple CPU cores. If "async IO" were all that was required, people would be sticking with Node.js for heavy loads.

>even 2kB of stack per thread is 195MB just from thread stacks.

You say this as if 195MB were a large amount of memory for a server to be using! Obviously, if the memory overhead of running 100,000 Goroutines is only 195MB, then memory limitations are not going to prevent you from running 100,000 Goroutines.


Linux is fine with a few hundred thousand threads too. I didn't mention it because it's significantly suboptimal to run that many OS threads or green threads like goroutines. If your scalability needs are that high, all the other little inefficiencies in Go, such as the lack of compiler optimizations and GC, are likely going to end up dominating more than threading models. We usually talk about high scalability as C10K; C100K is an extreme case.

> "Async IO" in itself doesn’t even offer a means of exploiting multiple CPU cores.

Sure it does. Multithreaded epoll is the standard solution for such things; that's how nginx is multithreaded, for example. Node isn't multithreaded, but that's a Node problem, not an async I/O problem.

> then memory limitations are not going to prevent you from running 100,000 Goroutines.

100,000 Linux kernel threads with 8kB kernel stack + 2kB user stack is only 1GB, which is likewise tractable.

And that 2kB assumes that the goroutine stacks remain small, which is not always the case. They can and will grow based on the dynamic behavior of the program.


You point out yourself that Linux kernel threads use more memory. For this and other reasons (e.g. slower context switches) you cannot spawn as many Linux kernel threads as you can spawn Goroutines.

The rest of your comment is just a repeated insistence that "Go doesn't scale" without a shred of evidence in support. You really seem to have some kind of beef with Go that's clouding your judgment in this area.

>Multithreaded epoll is the standard solution for such things

Multithreaded epoll is exactly what Go provides. Rather than having to manage thread pools yourself, the Go runtime does it for you by distributing logical threads of execution over OS threads.

>We usually talk about high scalability as C10K; C100K is an extreme case.

You seem to be assuming one Goroutine per connection. As I said before, a more plausible scenario for 100,000 goroutines is C10K with 10 goroutines per connection.


Go provides channels for safe communication between goroutines (which are distributed across multiple threads).


You can use channels to pass mutable objects, which are then shared between threads. The passing itself may be safe, but the resulting situation isn't.

The Rust compiler won't let you do that.


You can mutate objects from multiple threads in Go without using channels.


The Rust compiler won't let you do that either!


I know. I just didn't see why you were bringing channels into it. It's not as if it's some loophole involving channels that's the issue here.


People who think Rust is "complex" and "boilerplate-heavy" have never had to deal with C++, or heck, even Java. (Though I'd guess we would actually find plenty of C++/Java code within Google itself.) The only reason one could fairly give for that sort of statement is that Rust - the language itself, not just the ecosystem! - is still quite far from mature. But it's going to have a great future once some of the "tweaks" that are in the pipeline right now reach stability. Async is one of those of course, but there are more.

Edit: And to be clear, I'm not a Go-hater! In fact, Go comes out-of-the-box with a very nice, high-performance concurrent GC that makes it highly appropriate for some scenarios where Rust would not be a good fit at all.


I've written 100k+ LOC in Go and I've had to use interface{} once. I would like a coalesce operator though.


I would argue that Go's opinionated style is what makes it elegant. It's a programming language for people who are sick of programming languages. It makes clear promises and delivers on exactly those - it belongs in new code for that reason.


>I would argue that Go's opinionated style is what makes it elegant

I'd say Go is opinionatedly inelegant. It is minimal, I'll give you that, but has several non-orthogonal design decisions, smells, etc.


Please elaborate.


This article [1] explains why I'm not a fan.

[1] - https://www.teamten.com/lawrence/writings/why-i-dont-like-go...


Not compile-time checked repetitive error handling (if err != nil), duality of panic and error codes, specially blessed constructs (maps, slices, etc being "generic"), lack of expressivity, specially typed map libs, interface{} type safety escape hatch, etc...


If err != nil everywhere is definitely a code smell and poor language design.


No, being opinionated by itself isn't a good enough reason to belong in new code.


If you don't believe that making clear promises and delivering on them is, then what would you suggest is a better reason?


> If you don't believe that making clear promises and delivering on them is, then what would you suggest is a better reason?

How is "being opinionated" and "making clear promises and delivering on them" equivalent?


I didn't say they were. They weren't even in the same sentence in my original comment.


If the promises you are making are not appealing, even if they are clear and you deliver on them, you can't count that as a reason to use a language in new code.


You're arguing with a point that I'm not making at all.


What is your point then? You're being obtuse.


Go has been carefully designed to solve the problems of today. But relying on solving those problems at the language level means I have very low confidence that the language can adapt to solve the problems of tomorrow. They have built-in language-level support for concurrency, great - that's always going to be slightly more user friendly than a library-based solution. But what about when the next wave comes along? Are they going to be able to rework the whole language to incorporate whatever the 2020s' equivalent of the concurrency revolution is? While retaining compatibility with existing code?

I would not want to use Go for any codebase that's expected to last more than a couple of years. I'd put a lot more faith in languages that were able to adapt elegantly to the challenges of concurrency at the library level, without requiring language changes - that suggests those languages will be able to adapt similarly to the next revolution. And my impression is that languages that started with a solid theoretical underpinning, and/or a small core language with as much as possible pushed out into libraries, handled the transition a lot better than languages with the more ad-hoc, practical-problem-first approach to design that Go has.


I'm with you in general that golang code is very unmaintainable in the long term.

> Go has been carefully designed to solve the problems of today.

I'd even argue it was "carefully designed" to be in the '70s. Other than "goroutines", the language is stuck in that decade. No generics, no expression based syntax, exposing system-dependent "int" type and encouraging its use, badly designed interfaces that can't be used for tagging, badly designed time package, the list is too long to mention here.


My oldest "big" Go project started in 2010. Last commit date was 2013 - https://asciinema.org/a/cJosP51z0oScKPLmVUhyNKUOj

It still runs today. I have Haskell and Python programs that are somewhat smaller from around the same time that no longer run.

More importantly, I can go into my source code and I can still understand it. I can't say the same for the Haskell and Python programs.

I wonder which is more maintainable in the long term.


I didn't mention Haskell or Python. I'm not really familiar with Haskell, and I definitely wouldn't use Python, or any dynamically typed language for any project spanning more than a couple of files.

You can find old C programs that compile and run fine now, so I don't really see the point of your argument.


Smaller in the problems they solve, or just smaller in code? I wouldn't expect Python codebases to last well (no type safety and the language evolves quite aggressively), but I'm surprised you'd have Haskell that now didn't run (assuming you could reproduce the original build/dependency environment - Haskell package management was a mess until very recently) - can you give any details about what kind of thing goes wrong?


There are several factors as to why the Haskell program no longer worked - first is that I'm a crappy Haskell programmer. A lot of the things I thought I was doing ended up being NOT what I want upon reflection many years later (it was a game that was essentially a spreadsheet in disguise). Second, the ecosystem has changed a lot in the intervening years. Thus, nothing worked. I recently rewrote the game just to keep up my Haskell skills because I don't really write much on a day-to-day basis.


And yet, people are solving problems in it, often quite well.

I say this as one who has basically mastered Haskell, so it's not like I'm not aware of all these things: Fans of all these complicated abstractions really need to take some time and grapple with that fact. It will improve your programming in the more complicated languages as well.

No, seriously. Stop for a moment and think about that. Consider the advanced features as a scientific prediction: "If you do not use these features, you can't have a good program come out the other end." This prediction is falsified by the concrete existence of a lot of good Go programs. This matters. It is something that should be grappled with intellectually, if you are a language fan.

It will bring you to a deeper understanding of all the fancy features. By a deeper understanding, I mean that quite directly, with no attempt to be subtly saying something like "Oh, you'll find they're all useless in the end." That would not be true. They are not even remotely useless. They are incredibly useful. I would even go so far as to say that the development of these complicated sorts of features is legitimately among some of the most simultaneously interesting and useful intellectual work being done in our time, and greatly contributing to our ability to hold our software world together. It's just that the shape of that usefulness is more complicated than a lot of people realize, and a deeper understanding of that complication is very useful for developers.

One of the things that I think really comes out of it is a yet deeper understanding of matching tools to problems. I use a lot of Go, because I write a lot of things that are at a size and complexity where those other language features just aren't that helpful. Yet, there are tasks I could be assigned where I wouldn't dream of using Go, because I know I would need all that fancy stuff to hold the program together. I can easily imagine ways in which my career trajectory would change in the next couple of years and I'd go primarily Rust; I can equally easily imagine ways in which I will stay primarily Go. I can imagine ways in which I'd need to go with some other language. I think a lot of programmers who find these fancy features and fall in love with them end up overestimating the scope of the set of programs where they can be helpful. To give a degenerate example, my six line shell script does not need to be written in the Rust type system. It is not only not helpful, it would be actively harmful. The scope of programs that don't really need those features and where they quite easily become actively harmful is larger than a lot of people think. Especially if you consider them in the context of a heterogeneous team of developers who may not all be in the 5+ years of experience range, or when you may not be able to guarantee that the project is going to stay in those hands.


> Consider the advanced features as a scientific prediction: "If you do not use these features, you can't have a good program come out the other end." This prediction is falsified by the concrete existence of a lot of good Go programs.

I'd claim that programs built without those features have a ceiling on how complex a problem they can solve before maintainability collapses under its own weight. Go is popular for microservices, which makes a certain amount of sense (if you believe that microservices are an effective way of doing systems architecture - I don't, but I can see why people would). I've yet to see a Go system that I found genuinely impressive/innovative in terms of the functionality it provided.

> To give a degenerate example, my six line shell script does not need to be written in the Rust type system. It is not only not helpful, it would be actively harmful. The scope of programs that don't really need those features and where they quite easily become actively harmful is larger than a lot of people think.

Disagree. In my experience a significant proportion of production incidents happen because a six line shell script gets modified incorrectly, or does not handle an unexpected condition properly.

I've seen plenty of ineffective solving of business problems in e.g. Haskell, which I think boils down to a failure to appreciate how low the reliability requirements actually are in most businesses (exacerbated because many organisations are dishonest with themselves about such things, even in internal communications). I do think that someone who knows the advanced features but is willing to write "YOLO Haskell" (or similar language) will be more effective at solving business problems and creating value than someone using Go or similar (and at this point I've built my career on the effectiveness of fancier languages, even for 6-line-script-like tasks).


"Disagree. In my experience a significant proportion of production incidents happen because a six line shell script gets modified incorrectly, or does not handle an unexpected condition properly."

"My" six-line shell script isn't on production. It's something I wrote to rearrange my mp3 collection, or whatever. Then I deleted it.

My personal threshold for leaving shell is quite low. We may make fun of Perl, but it's really a lot, lot better than shell for any serious task. It's even lower for "production" tasks.

But whipping out the Rust because I want a shell script to increase the brightness of my screen through the /proc file system is massive overkill. I can write the thing in the amount of time it takes you to compile the Rust, and I'm not particularly trying to make fun of Rust here... it's just massive overkill.


I actually ended up writing a Rust program to change screen brightness through /proc myself. Previously, I used a Python script. Understandably, the Rust program is very significantly faster than Python, and I was surprised by the difference it made for interactive use (changing brightness by a scrolling keybind). For comparison, here are two iterations of the Python code, and the Rust code: https://gist.github.com/FreeFull/1e5873f0b13ec291158176db9d7...


> "My" six-line shell script isn't on production. It's something I wrote to rearrange my mp3 collection, or whatever. Then I deleted it.

Which is fine unless it deletes your mp3s, or dumps them all in the same folder, or something. A script inherently has the potential to do a lot of damage, because it can take a lot of actions very quickly. So it's worth being safe even if that means being a little slower.

> But whipping out the Rust because I want a shell script to increase the brightness of my screen through the /proc file system is massive overkill. I can write the thing in the amount of time it takes you to compile the Rust, and I'm not particularly trying to make fun of Rust here...

If your point is that a REPL or shell is a worthwhile feature then I agree with that. But you can have that in a safe language too.


> Consider the advanced features as a scientific prediction: "If you do not use these features, you can't have a good program come out the other end." This prediction is falsified by the concrete existence of a lot of good Go programs. This matters. It is something that should be grappled with intellectually, if you are a language fan.

I didn't mention anything about fancy features such as the likes of what Haskell and gang bring. I'm referring to much more "basic" things like generics, good error handling, non-nullable by default pointers, and so on. Things that one has come to expect in a modern language (which golang isn't in general).

Having the ability to build abstractions does not mean one must use them to write software; it means they are available when required, and when used correctly they end up simplifying the overall system architecture. I think the world has moved on from writing abstractions for the sake of writing abstractions, and I agree that it's not the way to go if such code is still being written. But the solution is not to get rid of the ability completely, resulting in a different sort of mess. At least we have tools that make navigating large code bases easier.

I feel it's actually golang people who end up reinventing things differently just for the sake of doing things differently. Take the builder pattern, for instance. For some reason, golang code I see uses a different, more convoluted pattern of passing closures that mutate an options struct to emulate it. It's a bigger mess to understand, and probably ends up generating more work for the GC, since closures keep getting allocated.


I was also solving problems in Z80 Assembly, once upon a time.


The sort of deeper understanding I'm talking about will actually integrate that as a point in a more comprehensive picture, rather than thinking of it as some sort of exceptional case. To the extent that you have a model in your head where that point doesn't fit (and, I assume, believe that would somehow challenge the model in my head), I'd say that is also a point to consider and ponder and try to integrate, rather than having it sit outside of the model that only fits a fraction of the code out there, and has places where it actively mismatches existing code bases.


I would argue that select is the more important feature. Goroutines are just lightweight threads, and Go isn't the first language to have them.

The lack of generics sucks, unfortunately. I don't care that most things aren't expressions, and tbh, I don't know a lot of people who do. A system-dependent int is not a problem for me, but I recognize that it is a problem in some fields. I like how interfaces are designed and used in the language, so I don't share your opinion there. As for the time package, the only thing that's badly designed is the American-centric parsing system. Everything else is, for me, ahead of the various packages I've used in other languages.


The fact that Duration is basically an int wrapper around nanoseconds, and that converting between different units is very awkward, is just the beginning. Take a look at Joda-Time (JVM, which served as the basis for what was later integrated into the standard library) or Noda Time (.NET) to see examples of properly designed time libraries. It's no wonder that golang's badly designed time library caused an outage at Cloudflare.


The codebase of Go itself (which is in Go of course) has been there for multiple "couple of years" already.


> Are they going to be able to rework the whole language to incorporate whatever 2020's equivalent to the concurrency revolution is?

This is why I'm sad that Rust has baked async/await into the language. Somewhere in the RFCs it is stated that "async/await is now known to be the right way to do concurrency," or something like that, but to me it is clear that it is at least a decade too soon to say that.


I would be curious if you could find a link for that quote, because Rust has always been of the mind that there is no one correct approach for concurrency, and has good support for channels, threads, work-stealing task pools, etc., all intended to serve different use cases. If there is a line like that somewhere in the RFCs, I would expect it to be in the context of "the design of async/await proposed herein is now known to be the right way to do asynchronous I/O in Rust, given Rust's other design choices and constraints [in contrast to the years of other experiments that have been tried, including Go-style green threads]". In particular, the long-term goal is not to make async/await the end-all be-all, but rather to expose a generalized concept of generators which async/await is then built upon.


It turns out to be hard to do concurrency in a library, if you need both absolute correctness (e.g. works with all system calls) and scale (more threads than the OS native threading will allow).

Afaik only Go and Erlang currently meet (at least my) criteria. No JVM solution currently does.


You have solutions like Akka, Vertx, and Reactor/RxJava on the JVM. Not to mention that the JVM is getting a green thread (fiber) implementation by means of project Loom.


"a programming language for people who are sick of programming languages" is nonsense.


Is it nonsense? Makes perfect sense to me. There are programmers who don't get excited about languages as a topic and would rather just get on with it. "People who are sick of programming languages" seems like an apt description for that crowd.

I personally dislike Go, precisely for many of the reasons why those "people who are sick of programming languages" would like it!


I like the phrase, too. I'm on the "sick of programming languages" side, and I like go, for the things I'm using it for: CLIs, simple debug web-servers, calling out to RPC services. (Read a CSV file, run everything through translate / grammar parsing / etc.)

I'm in that age of folks that did pascal in high school, C and Scheme in college, C++ and python in grad school while futzing around with ocaml and common lisp and sather and other fun things. Programming languages are fun.

But, oh, I have so much other stuff to pay attention to. I write C++ a few days a week, which is barely enough to keep move semantics in my head. I also write SQL and analysis docs and internal DSLs and configuration files and python.

At this point, C++ is barely earning its keep, in terms of mental complexity for reward. Go feels a lot nicer, since I can just get in, do my thing, get out, and it'll at least work and give me a compile error if one of the APIs change when I don't look at it for a year.

I'm not about to jump into rust.

So, at least for me, go hits a pretty good sweet spot.

[Edit: I'm not trying to point fingers at rust in particular. It looks fun. If I weren't still working on getting my hobbyist-Haskell better, I'd probably be playing with it. But that would be at home, for fun, not where I have to have the thing in my head for production issues.

It reminds me a bit of the old statement about perl, that it was a fantastic language if you used it every day, but not if you don't. And I don't get to use these things every day any more.]


Please give me this nonsense. I'm sick of programming languages.


Protip: You can write FORTRAN in any language!


Of course, you can (re)write code in any Turing-complete language in any other Turing-complete language.


Why?


Look at Java. Look at Spring and look at all the other junk out there. Every corner you go around you see smart people trying to outsmart each other rather than solve problems. Go makes that harder.


Spring and Java aren't junk.

I like Go for some things, but when using it I miss many things from Java:

* I can connect to the running JVM and change log levels, and other parameters live.

* I can debug a Java file that has compilation errors in it and fix them as I go in some cases. (In Go you can't even compile a file with an unused import)

* Java libraries and frameworks are mature. Some have been around for decades with all the testing and refinement that implies.

For some applications, like a REST server connecting to a database, why would I use Go and all its brand-new, limited libraries when I could use Dropwizard instead and get an easy-to-deploy JAR file?

Sure Java has some esoteric features, but you don't have to use them.

If you can't keep your team from being clever in Java, I doubt you'll be able to do so just because they use Go.

I've seen many Go libraries attempt to use introspection and backtick 'annotations' to very confusing and buggy effect.



Spring Boot is fantastic if you haven't checked it out.


Spring Boot is precisely the magical nonsense the GP was complaining about.


I like Java and Spring, but I don't use Spring boot.

Note that just because the recent tutorials out there mention Spring Boot doesn't mean you can't wire together a Spring app either through config files or programmatically.


In my opinion, it's not any more magical than any other framework.

What it is though is something that solves problems and has a great ecosystem, instead of reinventing the wheel.


Citation needed.

From what I understand there is no Rust being used internally at Google (apart from perhaps individual engineers tinkering with it just for fun).

There is no support for it from the build tools, so anything more than just messing about on your own machine won't work at Google: no libraries, no way to compile it, no way to test it, no way to deploy it, and no way to monitor it.


I do not know about internally, but Fuchsia apparently has major parts written in Rust.


And Chromium's crosvm. Rust is a perfect fit for that particular task (writing a VMM).

I would be surprised if there was widespread usage of it, though. Rust chose a whole different set of tradeoffs than Go.


I would agree that Go is rarely the best choice for particular problem, but I've found it to be a good choice for "enterprise development" for two reasons. It's easily reviewable by others with varying levels of skill and familiarity with the project, and it doesn't have a complicated deployment story.

At work, I've seen Go used primarily in projects that would have previously been written in Python or Ruby.


My biggest issue with navigating Go codebases is that sometimes it's cumbersome to find where a specific type/function is defined, when there are multiple files under the same directory.


A simple text search will find it every time. Since Go code should always be formatted, it's pretty trivial to search for `type HTTPServer` or `func NewServer(` ... even over fairly large codebases this will give few enough hits that you can just search everything most of the time.


Not to mention trying to figure out which types implement a given interface. You'll get completely unrelated types you don't care about a lot of the time.


Would this not be solvable with something like ctags or godef?

(I'm a newbie with Go so I've probably missed something in the problem you're describing).


I've had this issue as well and found that it makes using an IDE almost mandatory for large code bases.


I started working with Kotlin recently, and it's bad compared to Go's goroutine model; the biggest problem is actually knowing whether a function is sync or async, especially when you use other Java libs.

The fact that you need to create your own thread pools to run those coroutines really makes it inferior to the Go runtime, which does everything for you (multiplexing goroutines onto OS threads, creating a new thread if one blocks, etc.). In Go it just works and it makes sense; in Kotlin you need to do many, many things manually, and it's much more complex for no apparent reason.


It's more complex because you have more control. If the Go scheduler does something pathological, you're going to be digging a hell of a lot deeper than you need to in Kotlin.

You can just use a shared executor with a growing pool if you don't care.


Java is getting a fiber (green thread) implementation by means of project Loom. I recommend you check it out. In the mean time, you have other solutions like Vertx, Akka, and Reactor/RxJava.


> I just don't think Go has a place in new code

This makes a lot of sense. I've seen and worked on code at a large US tech company that is mainly reliant on golang for its services, and let me tell you, it's hideous. It's very verbose, and they end up reinventing everything you have in Java land anyway in a much more verbose and error-prone way. You also end up losing the tremendous monitoring and management capability of the JVM, as well as the extremely mature and diverse ecosystem, with integrations into basically anything you can think of. You want to integrate with an "enterprise" system or infrastructure? Ew, who'd want to do that? We use golang because it's hip; who cares about all that money-making business?

The funny thing is, there's still no getting away from Java/JVM, and they have a good number of services running on the JVM (particularly security related, and data processing pipeline services).

While the language is "simple" (i.e. underpowered) enough that you can read it, that immediately translates into verbosity and complexity in the code base, with things like helper functions that are basically "map/filter/etc." calls in any sane language taking up 8-10 lines of code scattered everywhere, as opposed to taking a single line of code as they're supposed to. Not to mention things like JSON handling, validation (no non-error-prone way to do it, unlike annotations or attributes on the JVM/.NET), and no way of doing compile-time DI (unlike e.g. Dagger or Micronaut on the JVM); the list goes on.

Java the language is improving with things like switch expressions, records, and value types being added. The JVM itself is getting fibers, and better native integration for high performance work. GraalVM enables compilation to native binaries if needed.

You also have languages like Kotlin to use on the JVM if you don't fancy Java itself.

The only reason golang is being pushed around is hype. It's so underpowered that using it for any moderately complex domain becomes a mess. Even generics if they decide to add them won't fix that.


I agree with almost everything you said. However, there is one point where I think you're wrong: the only advantage Go still has over Java (as it exists today) is the size of the runtime. You can get a Go container serving an HTTP server in ~10MB of image size and similar amounts of RAM, which is still a far cry from what I've seen Java achieve.

This is not important in many use cases, but if you are planning to run a microservices architecture, this can help a lot with horizontal scaling costs.


You might want to check out: https://quarkus.io/ or https://micronaut.io/ for that :)


Yes, these are very promising projects, but unfortunately they are just starting out this year. I hope they will prove to be stable and close enough to OpenJDK as time goes on, but I wouldn't bet a new product on them just yet.


> I just don't think Go has a place in new code.

This is a completely blanket statement. Not only are there very large open-source projects written in Go that your company more than likely already uses (Kubernetes, Docker, Prometheus, and InfluxDB, to name a few), there are also case studies of very respectable performance in Go:

1 million requests/min http://marcio.io/2015/07/handling-1-million-requests-per-min...

1 million websockets in go: https://www.freecodecamp.org/news/million-websockets-and-go-...

event loops in go: (surpasses redis in their benchmarks) https://github.com/tidwall/evio

I see golang more as a progression for JS or Python developers than for Java or JVM-language developers, as the concurrency model in golang is closer to Python's and JavaScript's async. The argument that Java, Kotlin, and Erlang developers are less likely to move to Go is probably a valid one, but it is completely baseless to say that "go has no place in new code".

The concurrency model in golang, with channels, makes more sense to someone familiar with Python/JavaScript who hasn't been exposed to threading or other concurrency models. If you can understand basic coroutines/channels, you get the benefits of golang's concurrency (and even parallelism) without much effort or code complexity, with an order of magnitude more performance than Python or JavaScript for the same feature set.

The profiling tools in golang are also first class (pprof, trace, etc.). With pprof, it's easy to see even traces of different goroutines, heap allocations, and cumulative allocations; these are safe to sample even on running production services, and all of this is provided by the standard library. There are even built-in tools to profile the GC. The argument that tooling and monitoring don't exist in golang is simply not true.


Rust is a programming language for people who like programming languages. Every feature under the sun. Seems like the new Scala in that regard. I'd definitely use it if I was building a web browser or something else where C++ was previously the only option, but I personally don't want Scala 2.0 to build web servers or cli apps. I would rather focus on what I'm building, which is where Go excels.


> Every feature under the sun.

Have you gotten this impression from actually using Rust, or merely from people who themselves have not used Rust? It deliberately omits a ton of ideas from other programming languages (exceptions, classes, data inheritance, monads, and more), and the features it does have are intended to compose well and without a huge amount of mental burden.


Yep, it leaves out ideas that are no longer in vogue but has every idea that is currently in vogue. So in 20 years, when those ideas are themselves thrown out, it will feel like a language of the 2010s.


Rust and Scala are at nearly opposite ends of the spectrum.

Rust forces you to be explicit about nearly everything, Scala is all about allowing you to hide the details in lots of different ways.


I focus on what I am building with JVM and .NET stacks, while enjoying quite expressive languages.


> all concurrency systems converge until they become an Erlang actor model

That's absolute gold, and I've seen a very accurate real life representation of that in recent times. Uncanny how many of those wheels we keep re-inventing.


Well, Erlang-like actor model is one basic pattern for achieving safe concurrency. A second one is keeping data read-only so that it can be freely accessed by multiple threads; and a further pattern is serializing access at run time (either using explicit locks, or in special cases with "non-blocking" algorithms and data structures). Rust is pretty unique in being able to comfortably encompass all of these patterns in its concurrency model, while ensuring safety.


> A second one is keeping data read-only so that it can be freely accessed by multiple threads

That's not really a "pattern for achieving safe concurrency". At one point, your concurrent processes need to communicate somehow or they're parallel more than concurrent.


What is your criticism of Go in all of this? That someone at Google prefers Rust? Or that you personally like Kotlin because of "the power of the JVM"?

Getting away from the JVM is one of the reason FOR using Go rather than Kotlin/Scala/Java etc.


This is an extraordinarily unproductive way to start a comment and root a thread.


  $ curl -s 'https://news.ycombinator.com/item?id=20211301' | egrep -o comhead | wc
       99      99     792
Seems to have been pretty productive to me.


Why shouldn't Go have a place in new code? You talk about Rust, but why does that make Go a bad choice? Is it for technical reasons or politics?


In my 30+ years of experience writing software in a wide variety of paradigms (never mind imperative languages) I've found most of the arguments for and against Go to amount to little more than tribalism and snobbery[1]

There is no such thing as a perfect language, and clearly some languages are designed to be better at some scenarios than others... and vice versa. The problem with Go is that it's designed to be a boring language whose aims are more around language simplicity and API stability. This makes it very easy to play the "death by a thousand paper cuts" game with Go. If you believed what you read on HN, nobody would be using it, when actually it is a very popular language.

[1] I'm sure that comment will annoy a great many on here but that's my honest observation.


This is exactly my experience.

I've used C (closer to Go than Rust) and I've used C++ (closer to Rust than Go) in previous jobs. My experience is that the C programmers tend to focus on the problem at hand. On C++ teams, people spend a lot of time going to C++ conferences and discussing the idiomatic approach to mundane tasks in their superior language. I imagine we'll see the same thing happen in Go and Rust... different personalities are drawn to the different languages.

I'd rather be going to security, networking and crypto conferences instead of wasting a lot of time on a particular language.


Language-specific conferences are not just a C++ thing, though. There are Java conferences, Ruby conferences, Python conferences, Rust conferences, etc.

The only thing that doesn't exist is a C conference, but that's because C is the least common denominator.

Pretty sure that there are Go conferences.


There are. GopherCons are a thing. Also, here's an ad for the upcoming GopherConAU : https://gophercon.com.au.

CFPs are open until end of June. https://papercall.io/gophercon-au-2019


> I've found most of the arguments for and against Go to amount to little more than tribalism and snobbery[1]

You can say that about any technical option that isn't garbage. That observation doesn't help us learn about Go, and doesn't let us dismiss all the other arguments.


The answer to their question (to paraphrase: "why are opinions so polarised on Go?") isn't a technical one. It literally is just people arguing personal preference.

If he or she wants a technical arguments for why they should or shouldn't use Go (which is a subtly different question but results in a significantly different answer) then there is already a wealth of debate on that topic already so there's no new technical perspectives that I - nor anybody else - could actually add.


Absolutely spot on.


It's taking on a fair amount of risk to use a language that (a) encourages the use of concurrency but (b) is not memory-safe when you do so, especially when writing network-facing code.

Go uses multi-word primitive values, which means assignments are not atomic. Consequently, racing assignment statements can leave objects in weird states that violate type and gc invariants. For example, racing assignments to a value of interface type can give you an object with the wrong dictionary for its data.

The argument for not worrying about this is that this is an unlikely scenario. It's reasonable to think in probabilistic terms when clients are not trying to break the software (eg, in games or office software). However, attackers are not probabilistic: they actively search out vulnerabilities and push on them, so potential vulnerabilities turn into actual vulnerabilities in predictable fashion.


Go definitely doesn't encourage code where multiple goroutines directly modify the same values. You're supposed to use channels for communication between goroutines by default.

Rust also lets you write memory unsafe multithreaded code, if you want to.


> Rust also lets you write memory unsafe multithreaded code, if you want to.

That's a false equivalence. Nobody is criticizing Go for having an `unsafe` package. The criticism is that your Go code can be undefined without using the `unsafe` package. In Rust, you need to use `unsafe` in order to write code that has data races. Otherwise, data races are statically prevented in safe code, unlike in Go.


You don’t need to use 'unsafe' yourself in Rust. Any package that you’re using might contain unsafe code that its author believes (rightly or wrongly) to be thread-safe.

The situation with Go is pretty clear. Channels are safe. If you start mutating shared state across different goroutines, then you’re in the same situation you’d be in if you were doing the same thing in C++. Anyone who doesn't understand why that's potentially unsafe wouldn't be ready for Rust in any case.


> You don’t need to use ‘unsafe’ yourself in Rust. Any package that you’re using might contain unsafe code that its author believes (rightly or wrongly) to be thread-safe.

Sigh. That's true of Go too. Both Rust and Go have `unsafe` labels that give access to dangerous things that can cause UB. Both Rust and Go are susceptible to UB-at-a-distance if library code internally uses `unsafe` incorrectly. The difference is that, in Go, you can write code with data races without using any `unsafe` at all. Nothing will yell at you (unless you exercise the race in a test and run the race detector). Rust does not suffer from this flaw.

I've been writing both Rust and Go since before each had their 1.0 release, and I like both languages. This isn't some back-handed comment about how I think Go is bad because of this. I rarely trip over data race related bugs in Go, but they do happen, so they aren't a huge issue in practice. (And your continued mention of channels is a red herring. Go has mutexes and idiomatic code uses mutexes quite frequently. Channels are not always the right thing to use.) Nevertheless, the comments here comparing Rust and Go are a fairly important mischaracterization of how each language treats memory safety, and it's important therefore to correct that.


>Sigh. That's true of Go too.

Yes I know. I'm not sure why you think I'm saying that this is not true of Go.

>The difference is that, in Go, you can write code with data races without using any `unsafe` at all

Again, I know. I addressed this in my previous comment.

Less sighing, more reading!


Yes, I'm sure you're aware. But the way your comments are written are pretty misleading, so I'm trying to clarify them. If you don't think the clarifications are beneficial to you, then treat them as clarifications for anyone else who is reading.

> Anyone who doesn't understand why that's potentially unsafe wouldn't be ready for Rust in any case.

That's most certainly not true. They might not be ready to write high performance generic data structures from scratch, but application level code in Rust generally has about as much explicit `unsafe` as application level Go code, in my experience. (And writing high performance generic data structures isn't really a thing in Go. Not nearly as much as in Rust anyway.)


I don’t think you are clarifying them so much as misreading them.

I'm confused by the second paragraph of your response. My point was that the lack of an explicit 'unsafe' block in Go isn’t that much of an issue, as pretty much everyone knows that Go does not make any memory safety guarantee when accessing shared state from different Goroutines. Anyone who doesn’t know this isn’t ready to get their head around writing multithreaded code in Rust. Rust’s features for doing this are cool, but they require a pretty good understanding of Rust’s complex type and lifetime system.


What a strange argument. "Pretty much everyone knows" that C/C++ aren't memory safe even for sequential code, and that it's incredibly easy to trigger thread-safety bugs in C/C++. But since "pretty much everyone knows", I suppose that makes it OK?

> Anyone who doesn’t know this isn’t ready to get their head around writing multithreaded code in Rust.

Well, if they aren't using `unsafe` in the process (or, to be fair, relying on buggy library code that uses `unsafe` inappropriately), they're not going to introduce memory unsafety via data races - no matter how "ready" they are. You just can't say the same about C/C++. Or even Go.


I don’t recommend that people use Go to write complex multithreaded code that accesses shared state. My point is that I doubt many people are doing this while erroneously thinking that their code is guaranteed to be memory safe. In other words, whatever the problems with Go may be in this regard, it’s not primarily the absence of an ‘unsafe’ keyword that’s the issue.


> I don’t recommend that people use Go to write complex multithreaded code that accesses shared state. My point is that I doubt many people are doing this while erroneously thinking that their code is guaranteed to be memory safe. In other words, whatever the problems with Go may be in this regard, it’s not primarily the absence of an ‘unsafe’ keyword that’s the issue.

I think the point here is that if the word "unsafe" exists as a language construct, someone is not wrong in assuming anything without that mark is safe. Otherwise, why isn't there a "safe" language construct?


That’s true to an extent, but it’s important to bear in mind that you can’t lexically scope memory unsafety. Rust code that doesn’t contain an 'unsafe' block should be safe assuming that all of the libraries it’s built on are safe. But those libraries may well contain 'unsafe' blocks that are (one hopes) carefully written to be memory safe.


Admittedly, my understanding of the problems you describe is superficial at best. But it sounds like you are citing a rather narrow set of requirements (memory safety in network code where protection against exploitation is paramount) as a means to generalize that Go is overall not a good language and should be avoided in new code. I don't think that's fair. As with everything, it may be not suitable for the scope you outlined, but it evidently has its place in a multitude of other uses.


“Memory safe” has a specific technical definition, and “atomic assignment” isn’t it.


No, the GP is correct. Data races are undefined behavior, and break memory safety: https://research.swtch.com/gorace


Is there a sense in which Russ's claim that "data races are not security problems" is true? It seems to be assuming that data races are impossible to exploit by users (data input), only by programmers.


Presumably because data races are so inconsistently triggered that creating a reliable exploit would be generally infeasible, even with control over user input. However, this unpredictability is also why data races (among other concurrency issues) are some of the most maddening types of bugs to observe, hunt down, and diagnose, which is why eliminating data races is actually a big deal for anyone who's ever wrestled with threads before.


Go is indeed memory-safe for sequential code, but non-atomic assignment can definitely break memory safety in multi-threaded settings - it's effectively a data race.


So LFE will reign http://lfe.io/

LFeO even ;)


I was curious about learning LFE, but couldn't tell whether it can interface with Phoenix (Elixir's web framework). Is there any way to use Phoenix from LFE?


> I just don't think Go has a place in new code.

This is a confusing statement, to me. This implies that go has a place in old code. Or are you saying that Go should not be used at all, because any future use is "new code"?


Rewriting existing, working programs in a different language very rarely makes sense. Avoiding a certain language for greenfield programs is much easier.


> And for those who have never used Erlang, learn it once, write something highly concurrent in it, and you'll learn to distaste almost every other language's faulty concurrency idioms almost immediately.

Does learning Elixir do the same?


I'm really curious now as to what Google flavoured Java looks like.


They taste like guava


What's "Google-flavored Java"?


Check out the Google Java Style Guide:

https://google.github.io/styleguide/javaguide.html


Guice, Guice and more Guice.

Some teams switch to Go because they are fed up with writing Google Java instead of real Java.


I'm still super annoyed that the solution to "dynamically resolving dependencies is a PITA" became "just make a monolithic app and pray for no external dependencies/conflicts".

Windows figured this out ages ago. You ship DLLs with your app, you keep them in the same directory, and run your app wherever, but update it incrementally. Now your user doesn't need to download a 120MB file every time there is a one-line change.

Wrapping this pattern in a ZIP file that can be executed directly (another Windows-ism) basically gives you everything you want from a static app, but adds the ability to inspect and patch components. Why does nobody use this method?


You can send out binary patches / deltas though?


The number of Go-haters (for lack of a better term) in here is astounding, to me.

If you don't want to see Go in new code, then don't use Go in new code. There's nothing wrong with this point of view, by the way (even though it confuses me a bit).

If you won't use Go because of its current lack of generics, that's a perfectly valid point! Don't use Go. I don't mind AT ALL.

If you don't like Go for whatever reason, then don't use it. Nothing at all wrong with that position.

What I am astounded by is the apparent notion in here that "others need to be made aware of my position and brought into agreement with me" - not quoting anyone, just trying to describe a feeling MANY of the comments here give me.

Maybe it's the HN culture to do this, or the culture of comment sections in general, I don't know.


I think it’s just that people have strong opinions about the tools they use, and that network effects in programming make people fear that other people’s choices will be foisted upon them.

Programmers have a strong desire to make things as ideally correct as possible, and that extends to technology choices (perhaps such choices are foundational) and so it can be an affront to one’s sensibilities to see the “wrong” choices being made.

But ultimately we all have different value systems and experiences. What works for us, and more importantly what appears important to us, may be different to others. But that is okay.

My take is that there’s a lot of space for experimentation with different ideas in languages and general approaches to building software. It’s still a very young field. We should keep pursuing lots of different ideas, old and new.


> network effects in programming make people fear that other people’s choices will be foisted upon them.

The hype is the main problem with Go.

It's a practical language but it's mediocre and in no way pushing the boundaries of language theory. Yet it's being sold as a huge leap forward in computer science.

People who are forced to use it by network effect resent that the decision is hype-based and not evidence-driven.


> it's being sold as a huge leap forward in computer science.

I don't think anyone is selling Go as a huge leap forwards in computer science. It is a huge leap forwards in software engineering though.


It always seemed to be a leap forward in practicality to me.

I need to read a queue, process something, and send a message with the results somewhere else. To do that I need a package for that specific queue, maybe one for whatever I'm sending to (unless it's plain JSON), and the standard library. Installation is scp and a HUP.

Where I'm using it, I don't need language theory, I just need to get my work done so I can move on to something interesting.


I did a lot of selling of Go, for years, and my position was that it’s about building an ecosystem that’s a solid basis for software engineering. Not a quantum leap, just sound proven ideas that yield predictable and reliable results.


You use the past tense here. Why did you stop selling?


It just seems like identity politics for programmers.

What I mean by that is that some people think that their own thoughts, feelings and experiences with regard to a language are universally true, and that anyone who disagrees with them is defective in some way.


I’m pretty sure the people that say they won’t use go for new code won’t use go for new code. Not sure what your point is.

Go (like Rust) is not particularly popular outside the wider HN/SV community.


This is just not true. 2.5 years ago, when I was looking for a remote-only Go job, I had so many positions to look at that I had to put them in a spreadsheet. None of them were in Silicon Valley.

Many, many places are using Go here and there. From startups to big enterprises.

It's rarely all they do, but it's pretty common to have some backend in Go.


I mean, sure, Go is also used elsewhere but it's at least at the moment still tiny in comparison with other languages.


Beyond my own experience, I've seen some data that leads me to believe that Go is more pervasive than I previously perceived:

    Go started out with a share of 8% in 2017 and now it has reached 18%.
    In addition, the biggest number of developers (13%) chose
    Go as a language they would like to adopt or migrate to.
https://www.jetbrains.com/lp/devecosystem-2019/

Regardless of variance, I wouldn't call 18% tiny. Especially given the trend:

https://jaxenter.com/go-number-one-for-2019-hackerrank-repor...


The same survey claims that Go is more popular than C. I would be interested to know how well the Jetbrains user base represents the overall developer community.


> The same survey claims that Go is more popular than C

Neither of the two surveys claims that. Feel free to point out where it does.


From the jetbrains link you posted. 18% used Go last year, 17% used C.


OK, so your definition of "popular" in this case is usage in the last 12 months. And by 1%.

C is a niche low-level language. It doesn't surprise me that a higher-level language is used more often.


What's SV?


I guess it is Silicon Valley.


Silicon Valley...?

