It seems like it would be better if the concurrency were pluggable somehow. Maybe Crawl takes some kind of worker-starting interface, with a suitable default implementation?
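Roughly what I have in mind (just a sketch; Starter, goStarter, and this particular Crawl/fetch shape are invented for illustration, not taken from any real package):

```go
package main

import (
	"fmt"
	"sync"
)

// Starter is a hypothetical worker-starting interface: Crawl hands it units
// of work and the implementation decides when and how they actually run.
type Starter interface {
	Start(task func()) // schedule a unit of work
	Wait()             // block until all scheduled work has finished
}

// goStarter is a plausible default implementation: one goroutine per task.
type goStarter struct{ wg sync.WaitGroup }

func (g *goStarter) Start(task func()) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		task()
	}()
}

func (g *goStarter) Wait() { g.wg.Wait() }

// Crawl finds new units of work and hands them to the Starter; it does not
// decide how they are scheduled. fetch returns the links found on a page.
func Crawl(url string, depth int, fetch func(string) []string, s Starter) {
	if depth <= 0 {
		return
	}
	for _, next := range fetch(url) {
		next := next // capture for the closure (pre-Go 1.22 loop semantics)
		s.Start(func() { Crawl(next, depth-1, fetch, s) })
	}
}

func main() {
	// Fake link graph standing in for real HTTP fetches.
	links := map[string][]string{
		"https://example.com/":  {"https://example.com/a", "https://example.com/b"},
		"https://example.com/a": {"https://example.com/"},
	}
	fetch := func(u string) []string {
		fmt.Println("fetch", u)
		return links[u]
	}
	s := &goStarter{}
	Crawl("https://example.com/", 3, fetch, s)
	s.Wait()
}
```

With that shape, swapping in a bounded or even single-threaded Starter wouldn't touch Crawl at all.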
Then the job of the crawler is to find new units of work, not to schedule them. In theory it could be done single-threaded by pulling work from a queue.
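For instance, the single-threaded version is just a queue loop (again a hypothetical sketch, with an in-memory fetch standing in for real HTTP):

```go
package main

import "fmt"

// A single-threaded take on the same idea: the crawler only discovers work
// and appends it to a queue; nothing here starts a goroutine at all.
func crawl(start string, maxDepth int, fetch func(string) []string) {
	type job struct {
		url   string
		depth int
	}
	queue := []job{{start, 0}}
	seen := map[string]bool{start: true}

	for len(queue) > 0 {
		j := queue[0]
		queue = queue[1:]

		fmt.Println("fetch", j.url)
		if j.depth >= maxDepth {
			continue
		}
		for _, next := range fetch(j.url) {
			if !seen[next] {
				seen[next] = true
				queue = append(queue, job{next, j.depth + 1})
			}
		}
	}
}

func main() {
	links := map[string][]string{
		"https://example.com/":  {"https://example.com/a", "https://example.com/b"},
		"https://example.com/a": {"https://example.com/"},
	}
	crawl("https://example.com/", 2, func(u string) []string { return links[u] })
}
```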
The Go scheduler is already taking units of work called goroutines and scheduling them. It's no big deal to ask the crawling system to cap how many goroutines it'll use; the patterns for that are well-established. And the cap isn't even the odd one out, because it's not all about goroutines here: crawling needs controls for how many requests/sec it makes to a given server, how deeply to recurse, what kind of recursion, and so on anyway, so a concurrency parameter doesn't particularly stick out.
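The usual shape, very roughly (illustrative only: a buffered channel capping in-flight fetches plus a crude global requests/sec tick; per-server limits, depth, and the rest would just be more knobs next to it):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Sketch of the well-established bounded-parallelism pattern: a counting
// semaphore (buffered channel) limits concurrent fetches, and a ticker gives
// a crude global requests-per-second throttle. Names and limits are made up.
func crawl(start string, maxDepth, maxConcurrent, reqPerSec int, fetch func(string) []string) {
	sem := make(chan struct{}, maxConcurrent) // counting semaphore for in-flight fetches
	throttle := time.Tick(time.Second / time.Duration(reqPerSec))
	var wg sync.WaitGroup

	var visit func(url string, depth int)
	visit = func(url string, depth int) {
		defer wg.Done()

		sem <- struct{}{} // acquire a fetch slot (blocks while maxConcurrent are busy)
		<-throttle        // wait for the next request-rate tick
		urls := fetch(url)
		<-sem // release the slot before recursing, so children can't deadlock on us

		if depth >= maxDepth {
			return
		}
		for _, next := range urls {
			next := next
			wg.Add(1)
			go visit(next, depth+1)
		}
	}

	wg.Add(1)
	go visit(start, 0)
	wg.Wait()
}

func main() {
	links := map[string][]string{
		"https://example.com/":  {"https://example.com/a", "https://example.com/b"},
		"https://example.com/a": {"https://example.com/c"},
	}
	crawl("https://example.com/", 2, 4, 10, func(u string) []string {
		fmt.Println("fetch", u)
		return links[u]
	})
}
```

Strictly speaking the semaphore bounds concurrent fetches rather than total goroutines, but goroutines parked waiting on it are cheap, which is why this pattern is usually considered good enough as the concurrency limit.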
Fair enough. But I'm not sure a wrapper around starting a goroutine counts as an inner platform, because it's not doing much work, and it's not really work that the Go SDK does. Choosing when to start goroutines and how many to start is an application concern.
Depending on how it's done, it might be a decent way to structure a crawler?