
I'm not so sure that multicore programming will be the future.

In the last couple of years, we have seen the rise of the netbook, the smartphone, and languages like Python and Ruby. Each of these is, apparently, "fast enough" to be highly successful.

Applications are also changing: casual games, Farmville and even World of Warcraft have very modest system requirements but make a ton of money. Web applications are awful from a performance perspective: they are crippled by network latency and need a single server to do the work for many people. Still, they are "the modern way" to write an application. Again, the performance is "good enough".

Finally, even when performance is needed, there are models for parallel programming other than shared-memory multicore. GPUs are very fast data-parallel "single core"-like devices. At the other end of the spectrum, real-world "web scale" systems scale horizontally and are thereby forced to adopt a multi-process model.

Yes, our OSes should be written for multicore, the newest Doom/Quake should be written for multicore, and our numerical models should at least be aware of multicore. But even though the C programmer in me would likely be happier in your multicore world than in the world I sketch above, I think most programmers will live in the latter.




GPUs are very fast data-parallel "single core"-like devices.

I don't think the "single core"-like part is accurate. Besides literally having multiple execution cores, GPUs are radically data-parallel in ways that force even those already familiar with data-parallel code (using tools such as OpenMP) to adapt their thinking. Code that extracts high performance out of GPUs must be aware of the memory hierarchy and the degree of parallelism - you can't pretend you're doing sequential programming.
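
To make that concrete, here is a minimal CUDA sketch (vec_add, d_a, d_b and d_out are made-up names, and the 256-thread block size is just an illustrative choice): the sequential loop disappears, each thread computes one element, and the launch configuration together with the memory access pattern is what actually determines performance.

    // Sequential C: one core walks the whole array.
    // for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];

    // CUDA: thousands of threads, each handling one element.
    __global__ void vec_add(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n)                                      // guard the tail
            out[i] = a[i] + b[i];
    }

    // Launch: the programmer picks the grid/block shape, and coalesced
    // memory access (plus shared memory, for less trivial kernels) decides
    // how much of the hardware's bandwidth you actually get.
    // vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);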


It's definitely not your old CPU - but it's a very different model from the threaded programming common with multi-core programs.

But I agree, "single core" isn't really true.


Keep in mind that Moore's law as originally stated was a law of economics, not engineering: decreasing feature size yields more dice per wafer and thus more revenue, while also increasing the performance of the individual ICs.

These days I think energy usage is the overriding concern, both because of thermal limits in high-performance systems and energy budgets in low-power systems. A smartphone that is fast enough to play Farmville or run a Python interpreter is cool and sells well, but there is still a huge economic incentive to cut energy usage further, since that would increase battery life.

The core of your argument seems to be that since things like netbooks, smartphones, Python and Ruby are currently "fast enough", there is little incentive to increase the core count. I disagree with that, because energy savings will (or would) still add massive economic value. Whether these kinds of systems will become reality is a different question, since of course programming them is a very hard problem.

But how cool would it be to have a smartphone with 100 cores, each running at a few MHz (and apps that use them efficiently), with a battery life of a month? Also, think of how motivated companies with datacenters are to reduce energy costs.


In the last couple of years, we have seen the rise of the netbook, the smartphone, and languages like Python and Ruby. Each of these is, apparently, "fast enough" to be highly successful.

For consumer purposes, yes. However, a ginormous amount of the software in operation today is not aimed at consumers at all, and doesn't resemble Farmville in the slightest.

Most programmers are writing COBOL (even if they are using a slightly more modern language to do it in).


I'm not sure I fully understand you, but I'll try to answer: yes, "enterprises" don't run their business off a smartphone, but if they move to (internal) web applications, or don't spend the extra funds to make their CRUD desktop applications even faster, then they don't use, or need, the multi-core parallel programming model either.

If the above is not an answer to what you meant, could you please clarify?


I mean that these "enterprise" applications (which are not CRUD desktop applications, and are definitely not moving to web applications in my lifetime) run on "big iron", which is multi-processor based, and that there will be lots of people working hard to make these platforms more parallel-friendly in the future.



