
As far as I know, by definition (coming from someone with a predominantly Java background), concurrency requires a multi-core processor, or, for example, a distributed application with at least two nodes that communicate with the same central server or "hub" (which, as far as I know, would still require a multi-core processor on the server). The central issue raised with concurrency is usually "race conditions": it cannot be predicted which thread or node will access a resource first. If tasks aren't executed in parallel, i.e. one has to wait for the other to finish before it starts or proceeds, I don't see how a race condition could occur.

Admittedly, I don't have deep knowledge of how high-level code translates to actual CPU instructions, so it may be possible, or even likely, that if the processor switches between tasks so that successive instructions come from different methods, concurrency issues would occur even on a single-core processor. A language like Java has robust, mature facilities for dealing with this, so I would be wary of using something like Node.js if this weren't well documented. Concurrency issues are nasty, and I have seen firsthand the mayhem they can cause in legacy applications I've worked on.
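The scenario described here, instructions from different threads interleaving on one core, is exactly how the classic lost-update race arises: "count++" is a read, an add, and a write, and the scheduler can preempt a thread between those steps. A minimal Java sketch (class and field names are illustrative), assuming the JVM's usual preemptive thread scheduling:

    public class LostUpdate {
        // Shared, unsynchronized counter. Incrementing it is not atomic,
        // so a thread can be preempted between reading and writing the
        // value; two threads then read the same value and one increment
        // is silently lost. No parallelism is needed, only preemption.
        static int count = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) count++;
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start(); b.start();
            a.join(); b.join();
            // Expected 200000; often prints less, even on a single core.
            System.out.println(count);
        }
    }

Marking increment() synchronized, or using java.util.concurrent.atomic.AtomicInteger, closes the race window regardless of core count.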



Concurrency doesn't require multi-core. Parallelism requires multi-core. A single core can interleave many tasks without ever running two at the same instant, and the race windows live at exactly those interleaving points.
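One way to see the distinction: a single OS thread can make progress on several logical tasks by interleaving their steps, which is concurrency with no parallelism at all (roughly what Node.js's event loop does). A toy Java sketch, with all names hypothetical:

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class ToyEventLoop {
        public static void main(String[] args) {
            // One thread, two logical tasks. Their steps interleave in the
            // queue, so both tasks make progress "at the same time"
            // conceptually, yet nothing ever executes in parallel.
            Queue<Runnable> ready = new ArrayDeque<>();
            for (int step = 1; step <= 3; step++) {
                final int s = step;
                ready.add(() -> System.out.println("task A, step " + s));
                ready.add(() -> System.out.println("task B, step " + s));
            }
            while (!ready.isEmpty()) {
                ready.poll().run(); // cooperative: each step runs to completion
            }
        }
    }

Because each queued step runs to completion before the next begins, this style avoids preemption races entirely; that is the trade-off single-threaded event loops make.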




