
Sorry, but is this golang concurrent networking pattern correct:

    func main() {
        ch := make(chan string)
        for i := 0; i < MAX; i++ {
            go fetchCert(ch) // MAX workers, all receiving from the same channel
        }
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            line := scanner.Text()
            ch <- line
        }
    }

All goroutines receive on the same channel! Shouldn't a new goroutine instead be launched for each net conn? One should be able to spawn thousands (or millions) of conns without hitting ulimits by using buffered chans, WaitGroups, timeouts, or counters...



This pattern is correct (though it has a flaw). It is a simple worker pool: each piece of work sent on the channel is grabbed and processed by the first available worker.

If you set MAX to 1000, you will have 1000 workers, and therefore up to 1000 simultaneous connections.

The flaw is that once the last piece of work is taken from the channel, main returns and the program exits, so the items still being processed at that moment get cut off. You could mitigate this with a second channel that each worker signals on when it finishes its work, ensuring the program only exits after the last worker is done.
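A minimal sketch of that second-channel fix (MAX and doWork are placeholder names, not the article's actual code):

    package main

    import (
        "bufio"
        "os"
    )

    const MAX = 100 // placeholder pool size

    // doWork stands in for the real per-item work (e.g. fetching a cert).
    func doWork(line string) {}

    func main() {
        work := make(chan string)
        done := make(chan struct{})
        for i := 0; i < MAX; i++ {
            go func() {
                for line := range work { // runs until work is closed
                    doWork(line)
                }
                done <- struct{}{} // signal that this worker has finished
            }()
        }
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            work <- scanner.Text()
        }
        close(work) // no more work coming; workers drain and exit
        for i := 0; i < MAX; i++ {
            <-done // block until every worker has finished its in-flight item
        }
    }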


The in-article version has a time.Sleep(2 * time.Second) after the scan loop. Not exactly reliable (a WaitGroup or channel signaling would be better), but better than nothing.
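For reference, a WaitGroup version would look roughly like this (a sketch; MAX and doWork are placeholders again, not the article's code):

    package main

    import (
        "bufio"
        "os"
        "sync"
    )

    const MAX = 100 // placeholder pool size

    func doWork(line string) {} // placeholder per-item work

    func main() {
        ch := make(chan string)
        var wg sync.WaitGroup
        for i := 0; i < MAX; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done() // marks this worker as finished
                for line := range ch {
                    doWork(line)
                }
            }()
        }
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            ch <- scanner.Text()
        }
        close(ch) // lets each worker's range loop end
        wg.Wait() // deterministic, unlike sleeping for two seconds
    }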


As you can see from almost all commands / snippets in the article, I took the pragmatic approach for this project ;)


Well, that whole file can be improved (for instance, analyzeDomains and analyzeDomain can be combined into one, and the range/close operators can be used on the channel), but the pattern itself (spawning a fixed number of workers instead of one per job) is decent for certain cases. For this case, you may be right (I haven't tested it), since a) not much data is transferred, and b) he was running this on an AWS instance instead of a low-end machine.

However, at a certain point, you may encounter bandwidth issues, timeouts, and the like due to local network congestion; that pattern has its uses there. I've tried writing a downloader that downloads every file it's given at once, and it went about as well as one would expect.


Thanks for the replies! Indeed, multiple goroutines can receive on a single shared channel to create a simple worker pool in a fan-out configuration. Analogously, a second fan-in channel can be used to merge the results of the parallel computation, with the caveat that the channels must be closed properly to make sure all tasks complete.
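Roughly this shape, as a sketch (jobs, results, and MAX are hypothetical names, and the string result is a stand-in for real output):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "sync"
    )

    const MAX = 100 // hypothetical pool size

    func main() {
        jobs := make(chan string)
        results := make(chan string)

        var wg sync.WaitGroup
        for i := 0; i < MAX; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for line := range jobs { // fan-out: all workers share jobs
                    results <- "done: " + line // stand-in for real output
                }
            }()
        }

        // Close results only after every worker has finished,
        // so the fan-in loop below can terminate.
        go func() {
            wg.Wait()
            close(results)
        }()

        go func() {
            scanner := bufio.NewScanner(os.Stdin)
            for scanner.Scan() {
                jobs <- scanner.Text()
            }
            close(jobs) // no more jobs; workers drain and exit
        }()

        for r := range results { // fan-in: merge all workers' output
            fmt.Println(r)
        }
    }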

Inspired now to start a "go-saturate" library for measuring max net capacity...



