Introduction to OpenCL (realworldtech.com)
20 points by ssp on Dec 11, 2010 | 10 comments



OpenCL is the most verbose, awkward monster I have ever seen. No exaggeration. The "Hello World" of GPGPU in OpenCL is over 100 lines long.
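
To make that concrete, here's an abridged sketch of the host-side setup, with all error checking omitted and `src` standing in for the kernel source string. Every one of these steps is mandatory before a single kernel can run:

    /* Abridged OpenCL host boilerplate -- a sketch only, with error
       checking omitted. `src` is the kernel source as a C string. */
    #include <CL/cl.h>
    #include <stddef.h>

    void run_vec_add(const char *src, float *out, size_t n) {
        size_t bytes = n * sizeof(float);
        cl_int err;
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);                /* pick a platform */
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL); /* runtime compile */
        cl_kernel k = clCreateKernel(prog, "vec_add", &err);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, out, 0, NULL, NULL);
    }

And the real "Hello World" still needs the kernel source, an error check on every call, and cleanup, which is how it passes 100 lines.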

Use CUDA directly. You get access to new features as the chips ship, and a much nicer interface. NVIDIA is so far ahead in terms of GPGPU that it's worth the vendor lock-in.
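
For contrast, here's a rough sketch of the same vector add in CUDA C (the kernel and launch parameters are illustrative, not from any particular codebase):

    // The same thing in CUDA C -- a sketch. No platform/context/queue
    // ceremony, and the kernel is compiled offline by nvcc.
    #include <cuda_runtime.h>

    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    void add_on_gpu(const float *a, const float *b, float *c, int n) {
        float *da, *db, *dc;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
        vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);  // one thread per element
        cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
        cudaFree(da); cudaFree(db); cudaFree(dc);
    }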


The C API is pretty verbose, but it's not poorly designed. Creating an expressive wrapper with a higher-level language is pretty straightforward: https://github.com/ztellman/calx.
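
Even in plain C, most of the ceremony can collapse behind a single call. Something like this hypothetical helper (my own sketch, not from calx, which does the same sort of thing from Clojure):

    /* Hypothetical convenience helper, not part of any real library:
       folds the platform/device/context/queue boilerplate into one call. */
    #include <CL/cl.h>

    cl_int cl_simple_init(cl_context *ctx, cl_command_queue *q) {
        cl_platform_id plat;
        cl_device_id dev;
        cl_int err = clGetPlatformIDs(1, &plat, NULL);
        if (err != CL_SUCCESS) return err;
        err = clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        if (err != CL_SUCCESS) return err;
        *ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        if (err != CL_SUCCESS) return err;
        *q = clCreateCommandQueue(*ctx, dev, 0, &err);
        return err;
    }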


I suppose it depends on what your application is.

If you're crunching numbers for a specific task, this could be the best option. However, I think increased use of the GPU is exciting because it can enhance the computing power of a broad range of consumer-oriented applications.

If you're producing an application that will be used by a broad range of consumers, perhaps vendor lock-in would prove short-sighted?


>NVIDIA is so far ahead in terms of GPGPU it's worth the vendor lock-in.

I am dubious. But I'll admit that this isn't a field I'm experienced with.

It looks like AMD GPUs are in only one of the top500 clusters, compared to seven with NVIDIA GPUs, so that lends some strength to your statement. But I'd rather write for an open architecture and move my code to whatever happens to be fastest at running it than write for a particular vendor and hope they keep up.

Betamax was (slightly) superior to VHS, and it didn't work out so well.

I guess I don't really understand how vendor lock-in would ever be acceptable in a market that exploded due entirely to the commoditization of the personal computer. Okay, CUDA is more pleasant to code in now, but what happens if AMD's GPUs start trouncing NVIDIA's? Or what if Intel figures out how to make Larrabee competitive? What if Oracle buys NVIDIA and starts charging extra for drivers that support CUDA?


It's much clearer than that. The entire community is rallying around CUDA and NVIDIA. Not once, in all my conversations about GPU computing (and I work with it), has anyone mentioned AMD chips. NVIDIA holds the GPU Technology Conference each year that everyone goes to, and all the GPU books are being written about CUDA. Every Mac has a CUDA-capable GPU inside. NVIDIA has been collaborating with universities to get devices to students. AWS offers Fermi (NVIDIA's next-gen architecture) GPUs as a service. Autodesk and mental images have teamed up with NVIDIA to do CUDA-specific acceleration of their products. In the past two years of working on GPGPU, I've heard ATI mentioned four or five times, and then only by the lay press.


That's definitely interesting; I'm certainly curious to see how it plays out down the line.


Yes. But even CUDA is pretty verbose and weird in places.

CUDA C compiles to PTX, an assembly-like intermediate language. You could target it with much nicer high-level languages to hide a bunch of clunky stuff like cudaMemcpy. Unfortunately, it seems to be such a moving target right now that all the non-C interfaces I've seen are either very incomplete or just wrap the C API internally.


There are actually some very interesting efforts to build languages that compile to PTX ("Parallel Thread Execution"). NVIDIA treated PTX as a trade secret in the beginning, so only now is this really beginning to take off. The introduction of C++ support with Fermi took a little of the pressure off, but I bet we'll see some new parallel-oriented languages in the next few years.
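
For the curious, this is roughly what a trivial hand-written PTX kernel looks like (a sketch against the Fermi-era 2.x ISA; the register names are illustrative and the exact directives vary by PTX version):

    .version 2.3
    .target sm_20
    .address_size 64

    // Increment the first float behind the pointer parameter by 1.0f.
    .visible .entry inc_first(.param .u64 out)
    {
        .reg .u64 %rd<3>;
        .reg .f32 %f<3>;
        ld.param.u64       %rd1, [out];
        cvta.to.global.u64 %rd2, %rd1;
        ld.global.f32      %f1, [%rd2];
        add.f32            %f2, %f1, 0f3F800000;  // 1.0f as a hex literal
        st.global.f32      %f2, [%rd2];
        ret;
    }

You can see why people would rather emit this from a compiler than write it by hand.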


Why base it on C? Programming parallel systems is a new paradigm, one that is distinctly different from the imperative world of C. Just compare the example FFT on wikipedia (https://secure.wikimedia.org/wikipedia/en/wiki/Opencl) with for example the StreamIt implementation (http://groups.csail.mit.edu/cag/streamit/apps/benchmarks/fft...).

Alright. Damn. That isn't actually that much better. Does anyone know a language that has an elegant yet efficient parallel implementation of the FFT?


Maybe they should get a different name for this; I thought it was an open source implementation of Common Lisp.



