
Kind of, but with the difference that Goroutines are lighter than kernel threads, so you can run more of them.


Not as much as you'd think. You can run thousands of kernel threads on Linux.


https://golang.org/doc/faq#goroutines

"It is practical to create hundreds of thousands of goroutines in the same address space. If goroutines were just threads, system resources would run out at a much smaller number."

More generally, Go is hardly the first language to implement userspace threads backed by OS threads. Erlang famously does this, for example, and so does GHC: https://en.wikibooks.org/wiki/Haskell/Concurrency

In response to your other comment, the whole point of Goroutines is that you can use them pretty freely without (usually) having to worry about how many there are. Let's say you're writing a server, and that handling one of the requests requires completing 10 tasks, each of which can run in parallel. In Go you can just spawn a Goroutine for each of those tasks. But if you were using real OS threads, then that implementation could easily end up spawning more than a few thousand threads, when the server was under significant load.
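
A rough sketch of that pattern in Go (the Task type and the work it does here are hypothetical, just to show the shape of it):

    package main

    import (
        "fmt"
        "sync"
    )

    // Task stands in for one of the independent sub-tasks of a request.
    type Task func() error

    // handleRequest fans each task out into its own goroutine and waits for
    // all of them. The cost is one goroutine per task, not one OS thread.
    func handleRequest(tasks []Task) {
        var wg sync.WaitGroup
        for _, t := range tasks {
            wg.Add(1)
            go func(t Task) {
                defer wg.Done()
                if err := t(); err != nil {
                    fmt.Println("task failed:", err)
                }
            }(t)
        }
        wg.Wait()
    }

    func main() {
        tasks := make([]Task, 10)
        for i := range tasks {
            i := i
            tasks[i] = func() error { fmt.Println("task", i); return nil }
        }
        handleRequest(tasks)
    }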


That FAQ answer is misleading. Linux supports thousands of threads just fine. If you have hundreds of thousands, you probably want true async I/O instead of having stackful threads at all.

> But if you were using real OS threads, then that implementation could easily end up spawning more than a few thousand threads, when the server was under significant load.

There's no inherent problem with spawning a few thousand threads in Linux.


The FAQ is talking about “hundreds of thousands of threads”. It doesn’t make sense to respond by saying that Linux is fine with “a few thousand threads”. Similarly, I was talking about a scenario where a server would end up spawning more than a few thousand threads.

>If you have hundreds of thousands, you probably want true async I/O instead of having stackful threads at all.

This is a completely baseless assertion, and really makes no sense at all. "Async IO" in itself doesn’t even offer a means of exploiting multiple CPU cores. If "async IO" were all that was required, people would be sticking with Node.js for heavy loads.

>even 2kB of stack per thread is 195MB just from thread stacks.

You say this as if 195MB were a large amount of memory for a server to be using! Obviously, if the memory overhead of running 100,000 Goroutines is only 195MB, then memory limitations are not going to prevent you from running 100,000 Goroutines.
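
If you want to sanity-check that ballpark yourself, a quick-and-dirty sketch like this does it; the exact figure depends on the Go version and on what the goroutines actually touch, but idle goroutines cost a few kB each:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        const n = 100000

        var before, after runtime.MemStats
        runtime.GC()
        runtime.ReadMemStats(&before)

        release := make(chan struct{})
        var wg sync.WaitGroup
        wg.Add(n)
        for i := 0; i < n; i++ {
            go func() {
                defer wg.Done()
                <-release // park here until we've measured
            }()
        }

        runtime.ReadMemStats(&after)
        fmt.Println("goroutines:", runtime.NumGoroutine())
        fmt.Println("approx extra MB:", (after.Sys-before.Sys)/(1<<20))

        close(release)
        wg.Wait()
    }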


Linux is fine with a few hundred thousand threads too. I didn't mention it because it's significantly suboptimal to run that many OS threads or green threads like goroutines. If your scalability needs are that high, the other little inefficiencies in Go, such as the comparatively limited compiler optimizations and the GC overhead, are likely to end up dominating over the threading model. We usually talk about high scalability as C10K; C100K is an extreme case.

> "Async IO" in itself doesn’t even offer a means of exploiting multiple CPU cores.

Sure it does. Multithreaded epoll is the standard solution for such things; nginx scales essentially that way, with an epoll event loop per worker, for example. Node isn't multithreaded, but that's a Node problem, not an async I/O problem.

> then memory limitations are not going to prevent you from running 100,000 Goroutines.

100,000 Linux kernel threads at 8kB of kernel stack plus 2kB of user stack is 10kB per thread, or only about 1GB in total, which is likewise tractable.

And that 2kB assumes that the goroutine stacks remain small, which is not always the case. They can and will grow based on the dynamic behavior of the program.
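
For example, a goroutine running something like this starts at the runtime's small initial stack but ends up with megabytes of stack at peak recursion depth; the runtime grows (and later shrinks) it transparently, but the memory is real (contrived numbers, purely illustrative):

    package main

    import "fmt"

    // eat uses a roughly 1kB frame per call, so deep recursion forces the
    // goroutine's stack to grow well past its initial size.
    func eat(depth int) byte {
        var buf [1024]byte
        for i := range buf {
            buf[i] = byte(depth + i)
        }
        if depth == 0 {
            return buf[0]
        }
        return eat(depth-1) ^ buf[len(buf)-1]
    }

    func main() {
        done := make(chan byte)
        go func() { done <- eat(10000) }() // roughly 10MB of stack at peak
        fmt.Println(<-done)
    }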


You point out yourself that Linux kernel threads use more memory. For this and other reasons (e.g. slower context switches) you cannot spawn as many Linux kernel threads as you can spawn Goroutines.

The rest of your comment is just a repeated insistence that "Go doesn't scale" without a shred of evidence in support. You really seem to have some kind of beef with Go that's clouding your judgment in this area.

>Multithreaded epoll is the standard solution for such things

Multithreaded epoll is exactly what Go provides. Rather than having to manage thread pools yourself, the Go runtime does it for you by distributing logical threads of execution over OS threads.
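
Concretely, a goroutine-per-connection server can look like the sketch below (a minimal echo server, details arbitrary): blocked reads and writes get parked on the runtime's netpoller (epoll on Linux), and runnable goroutines are scheduled across up to GOMAXPROCS OS threads, without the application ever touching epoll or thread pools.

    package main

    import (
        "bufio"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go handle(conn) // one goroutine per connection
        }
    }

    // handle echoes lines back; the "blocking" calls here are multiplexed
    // onto epoll by the runtime rather than tying up an OS thread.
    func handle(conn net.Conn) {
        defer conn.Close()
        r := bufio.NewReader(conn)
        for {
            line, err := r.ReadBytes('\n')
            if err != nil {
                return
            }
            if _, err := conn.Write(line); err != nil {
                return
            }
        }
    }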

>We usually talk about high scalability as C10K; C100K is an extreme case.

You seem to be assuming one Goroutine per connection. As I said before, a more plausible scenario for 100,000 goroutines is C10K with 10 goroutines per connection.



