My word processor is perfectly well able to keep up with my typing speed.
There is no reason to try to rewrite any of these to use funny algorithms that can spread work over tons of cores. They generally don't provide enough work to keep even one core busy; the only real concern is UI latency (mostly during blocking I/O).
Compiling things can take a while, but that's already easily parallelized by file.
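To make that concrete, here's a minimal sketch using CPython's standard-library `compileall`, which already does exactly this: it hands independent files out to a pool of worker processes, one compile job per file.

```python
# File-level parallel compilation, sketched with CPython's own
# byte-compiler: independent .py files are farmed out to a pool of
# worker processes.
import compileall

def parallel_compile(tree: str) -> bool:
    # workers=0 asks compileall for one worker per available core;
    # quiet=1 suppresses the per-file progress output.
    return bool(compileall.compile_dir(tree, workers=0, quiet=1))
```

Something like `parallel_compile("myproject/")` byte-compiles the whole tree across all cores; `make -j` does the same thing for C projects.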
I'm told image/video processing can be slow, but there are already existing algorithms that work on huge numbers of cores (or on GPUs).
Recalcing very large spreadsheets can be slow, but that should be rather trivial to parallelize (along the same lines as image processing).
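For what it's worth, the "trivial to parallelize" shape both of those share is just a parallel map. A toy Python sketch, where `double_cell` is a made-up stand-in for a real cell formula (or a per-pixel kernel):

```python
# Data-parallel recalc sketch: when the cells (or pixels) in a pass
# don't depend on each other, recalculation is a parallel map.
from multiprocessing import Pool

def double_cell(value):
    # Stand-in formula; a real spreadsheet would evaluate the cell's
    # expression here, an image filter would transform a pixel block.
    return value * 2

def recalc(cells):
    # One task per cell; the pool spreads them over the cores.
    with Pool() as pool:
        return pool.map(double_cell, cells)
```

The hard part in a real spreadsheet is the dependency graph between cells, not the map itself; only cells in the same topological "layer" can be recalculated together like this.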
So isn't the article pretty much garbage?
Your argument is similar. Everything you have on your machine right now works fine as it is. What you aren't taking into account is that the list of things on your machine is defined by the machine's limitations. A new style of computation will open doors and allow things onto your machine that are currently not possible because of limitations in your processor.
Ignoring that, I agree with your point wholeheartedly: just because things work now does not mean we can't optimize. In fact, a very large part of the software field is all about optimizing things that already exist.
And who are you to say that today's parallel algorithms for solving things like these are optimal? To me, it certainly seems worthwhile to consider possible alternate parallelization models.
> My email client is pretty much I/O-bound.
> My word processor is perfectly well able to keep up with my typing speed.
Better speech to text? Better prediction (so you have to type less)? Better compression for attachments? Better grammar checking?
It's clear that web sites are slowly transitioning into a role that is similar to desktop apps, so there will be a million applications that can use hefty computational power.
> Compiling things can take a while, but that's already easily parallelized by file.
Better optimizations? Things like evolutionary algorithms for assembly generation come to mind, which could use any conceivable number of cores. Better static analysis?
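Evolutionary search has exactly the embarrassingly parallel shape that soaks up cores: every candidate in a generation can be scored independently. A toy sketch, where `fitness` is a made-up objective standing in for actually benchmarking generated code:

```python
# Parallel scoring of one generation of candidates; the outer
# mutate-and-select loop would just call this repeatedly.
from concurrent.futures import ThreadPoolExecutor

def fitness(candidate):
    # Made-up objective: prefer candidates close to 42. A real search
    # would assemble and benchmark the candidate program instead.
    return -abs(candidate - 42)

def best_of_generation(population):
    # Score every candidate on its own worker, then keep the fittest.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(fitness, population))
    return max(zip(scores, population))[1]
```

With real benchmarking (running candidate binaries as subprocesses), the workers genuinely overlap, and the population size can scale with however many cores you have.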
> I'm told image/video processing can be slow, but there are already existing algorithms that work on huge numbers of cores (or on GPUs).
GPUs are typically not as good as CPUs at integer math; in particular, they're not deeply pipelined, etc. This could conceivably become much faster.
> Recalcing very large spreadsheets can be slow, but that should be rather trivial to parallelize (along the same lines as image processing).
There are a ton of things that could happen here with massive computational power. Spreadsheets that use machine learning techniques to predict things? More powerful models?
> So isn't the article pretty much garbage?
Come on, people have been coming up with novel uses for our ever-increasing computational power for well over a hundred years now (from mechanical computers to octo-core desktop machines). Why would they suddenly stop now?
Each client connection is entirely independent of the other client connections. That's trivial to parallelize (thread pool, Erlang, whatever).
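That shape, sketched minimally in Python (`handle_client` is a hypothetical stand-in for the real protocol logic):

```python
# Shared-nothing, per-connection parallelism: each client is handled by
# a worker from a thread pool and touches no shared state, so no locks
# are needed.
from concurrent.futures import ThreadPoolExecutor

def handle_client(request: bytes) -> bytes:
    # Hypothetical per-client work; a real server would read from and
    # write to the client's socket here.
    return request.upper()

def serve(requests):
    # One task per connection; a real server would submit tasks as
    # connections are accepted rather than mapping over a list.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(handle_client, requests))
```

The Erlang version is the same idea with a lightweight process per connection instead of a pooled thread.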
I also have one where the clients are almost entirely independent; that one connects to a database which handles the "almost" part.
But maybe you're someone who's happy with the cores and algorithms he already has. That's OK with me. There will certainly always be problems where shared-nothing parallelism over commodity hardware is the most cost effective. But not everyone is mining Bitcoin or computing the Mandelbrot set.
I don't think your typical application will have to take advantage of all 200 cores in an explicit way. The use of dynamic languages on the desktop shows that developers have been willing to trade performance for development time for a while now. Why is that suddenly going to change, especially with the more difficult concurrent landscape before us?