
My email client is pretty much I/O-bound.

My word processor is perfectly well able to keep up with my typing speed.

My web browser is largely I/O-bound, except on pages that do stupid things with JavaScript.

There is no reason to try to rewrite any of these to use funny algorithms that spread work over tons of cores. They generally don't provide enough work to keep even one core busy; the only real concern is UI latency (mostly during blocking I/O).

Compiling things can take a while, but that's already easily parallelized by file.
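
Something like this is basically all it takes (a throwaway Python sketch; the file list and the cc invocation are just placeholders):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    sources = ["a.c", "b.c", "c.c"]   # made-up file list

    def compile_one(src):
        # each compiler invocation is independent of the others
        subprocess.run(["cc", "-c", src], check=True)

    # threads are fine here: the real work happens in the cc subprocesses
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(compile_one, sources))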

I'm told image/video processing can be slow, but there are already existing algorithms that work on huge numbers of cores (or on GPUs).

Recalcing very large spreadsheets can be slow, but that should be rather trivial to parallelize (along the same lines as image processing).
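
E.g., group cells by dependency depth and farm each level out to a pool. A rough Python sketch; the levels/evaluate structure is made up for illustration, and it assumes the evaluate function and cell values can be pickled across processes:

    from concurrent.futures import ProcessPoolExecutor
    from functools import partial

    def recalc(levels, evaluate):
        # levels: cells grouped by dependency depth (level 0 has no inputs);
        # evaluate(cell, known) computes one cell's formula from values
        # already computed in earlier levels
        known = {}
        with ProcessPoolExecutor() as pool:
            for level in levels:
                # cells within one level don't depend on each other,
                # so each can go to its own core
                results = pool.map(partial(evaluate, known=known), level)
                known.update(zip(level, results))
        return known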

...

So isn't the article pretty much garbage?




When I was in 5th grade the Nintendo 64 came out, and a friend of mine commented that this was going to be the last major video game console release of all time. After all, the graphics were so mind-bogglingly good that no one would ever have any desire to create a better system. The N64 perfectly fulfilled everyone's game requirements.

Your argument is similar. Everything you have on your machine right now works fine as it is. What you aren't taking into account is that the list of things on your machine is defined by the machine's limitations. A new style of computation will open doors and allow things onto your machine that are currently not possible because of limitations in your processor.


Heh, the N64 was the first game console I ever used. I thought it was pretty spiffy. I still think it's pretty spiffy. In fact, looking at some screenshots, I would totally play some of those games over ones we have now.

Ignoring that, I agree with your point wholeheartedly: just because things work now does not mean we can't optimize. In fact, a very large part of the software field is all about optimizing things that already exist.


With a little imagination, you can come up with countless ideas on how to use a thousand cores for each of the applications you listed. I've listed a few, which are likely a bunch of crap, but I'm just one person. There are millions and millions of other minds coming up with ideas better than these every day.

And who are you to say that today's parallel algorithms for solving things like these are optimal? To me, it certainly seems worthwhile to consider possible alternate parallelization models.

> My email client is pretty much I/O-bound.

> My word processor is perfectly well able to keep up with my typing speed.

Better speech-to-text? Better prediction (so you have to type less)? Better compression for attachments? Better grammar checking?

> My web browser is largely I/O-bound, except on pages that do stupid things with JavaScript.

It's clear that web sites are slowly transitioning into a role similar to that of desktop apps, so there will be a million applications that can use hefty computational power.

> Compiling things can take a while, but that's already easily parallelized by file.

Better optimizations? Things like evolutionary algorithms for assembly generation come to mind, which could use any conceivable number of cores. Better static analysis?
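
Totally hand-wavy, but the shape of it is something like the Python sketch below, where mutate() and benchmark() are stand-ins for whatever the compiler would actually do to rewrite and time instruction sequences. Every candidate can be scored on its own core:

    from concurrent.futures import ProcessPoolExecutor

    def evolve(seed, mutate, benchmark, generations=100, population=1000):
        # mutate(code) returns a tweaked candidate; benchmark(code) returns
        # its runtime, so lower is better. Both are stand-ins here.
        best = seed
        with ProcessPoolExecutor() as pool:
            for _ in range(generations):
                candidates = [mutate(best) for _ in range(population)]
                # every candidate gets scored on its own core
                scores = list(pool.map(benchmark, candidates))
                best = candidates[scores.index(min(scores))]
        return best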

> I'm told image/video processing can be slow, but there are already existing algorithms that work on huge numbers of cores (or on GPUs).

GPUs are typically not as good as CPUs at integer math; for one thing, they aren't as deeply pipelined. With lots of general-purpose cores, this kind of work could conceivably get much faster.

> Recalcing very large spreadsheets can be slow, but that should be rather trivial to parallelize (along the same lines as image processing).

There are a ton of things that could happen here with massive computational power. Spreadsheets that use machine learning techniques to predict things? More powerful models?

> So isn't the article pretty much garbage?

Come on, people have been coming up with novel uses for our ever-increasing computational power for well over a hundred years now (from mechanical computers to octo-core desktop machines). Why would they suddenly stop now?


You've failed to consider tons of other applications. What about AI programs like Watson? Also, scientific computing isn't always of the "embarrassingly parallel" sort.


Have you never written a server application?


Yes I have.

Each client connection is entirely independent of the other client connections. That's trivial to parallelize (thread pool, Erlang, whatever).
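
For what it's worth, the whole thread-pool version fits in a handful of lines. A toy Python sketch (an echo server, nothing real):

    import socket
    from concurrent.futures import ThreadPoolExecutor

    def handle(conn):
        # each connection touches no shared state, so no locking to think about
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)   # toy echo "protocol"

    def serve(port=9000):
        with socket.create_server(("", port)) as srv, \
                ThreadPoolExecutor(max_workers=64) as pool:
            while True:
                conn, _ = srv.accept()
                pool.submit(handle, conn)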

I also have one where the clients are almost entirely independent; that one connects to a database which handles the "almost" part.


Not all server applications are so easy to parallelize. For example, there's the database server itself, which is essentially a box you shove all your data concurrency into, hoping that once you start to hit its limits you'll be able to rearchitect your app faster than your load is growing.

But maybe you're someone who's happy with the cores and algorithms he already has. That's OK with me. There will certainly always be problems where shared-nothing parallelism over commodity hardware is the most cost-effective. But not everyone is mining Bitcoin or computing the Mandelbrot set.


For most applications, I suspect the near-term benefit will simply be an OS that can schedule processes onto more processors. That's nice and will improve the experience for people, and OS designers can probably use more daemons as a result to do other fun things. The browser, for its part as a slow, broken version of an operating system, gets to run each page as a process, which helps that experience some too (and is already happening, with each tab as a process).

I don't think your typical application will have to take advantage of all 200 cores in an explicit way. The use of dynamic languages on the desktop shows that developers have been willing to trade performance for development time for a while now. Why would that suddenly change, especially with the more difficult concurrent landscape before us?


Watch the video: the interviewer literally asks about applications of the technology, and Ungar gives several, including business analytics, intelligent cities, and associate search; basically anything that has to do with extracting meaningful information from massive data sets.



