
Ask HN: Beginner's example to running something on GPU? - reacharavindh
Could anyone share a beginner's resource to run my first program on a GPU?

I'm a perf analyst with extensive use of Python for scripts, and a decent background in C.

I don't have a problem in mind. I'd like an example to try, and then think of interesting problems I can use my new skills to solve.

PS: I don't have access to a GPU locally. I would provision a GPU instance on AWS and play around.
======
Communitivity
A good example, and something that will help you learn deep learning, is
NVIDIA's Digits program. You can check it out here:
[https://developer.nvidia.com/digits](https://developer.nvidia.com/digits)

I took one of NVIDIA's on-site labs at the DC GPU Tech Conference, which used
Digits, and would highly recommend anyone interested in Deep Learning do the
same. Some of their Deep Learning Institute labs are available online here:
[https://nvidia.qwiklab.com/tags/Deep%20Learning](https://nvidia.qwiklab.com/tags/Deep%20Learning).

Digits is also set up to use their AMIs on AWS, as an easy way to experiment
without a modern GPU of your own at home.

~~~
reacharavindh
Thanks. I will check this out as well.

------
blackflame7000
If you have an NVidia GPU, I would start with learning CUDA. That should help
get you going on the concepts of SIMD (single instruction, multiple data)
programming. That's where I started. Additionally, while I'm not quite up to
date on its current status, OpenCL looked like it was on pace to become a
nice standardized way of talking to accelerators (GPUs, FPGAs, co-processors,
etc.).

[https://docs.nvidia.com/cuda/cuda-c-programming-guide/](https://docs.nvidia.com/cuda/cuda-c-programming-guide/)
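Since the asker works mostly in Python, a first CUDA kernel can be sketched with Numba's CUDA support rather than CUDA C. This is a minimal, illustrative sketch (the kernel name and sizes are made up for the example); it assumes an NVIDIA GPU with CUDA drivers and the `numba` package installed, e.g. on a GPU instance on AWS:

```python
import numpy as np
from numba import cuda


@cuda.jit
def vector_add(a, b, out):
    """Each GPU thread adds one pair of elements."""
    i = cuda.grid(1)       # this thread's global index
    if i < out.size:       # guard: the grid may overshoot the array length
        out[i] = a[i] + b[i]


n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# Launch configuration: enough blocks of 256 threads to cover n elements.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies host arrays to/from the GPU

assert np.allclose(out, a + b)
```

The per-thread guard and the block/grid arithmetic are the same concepts the linked CUDA C guide introduces, so the Python version transfers directly once you move to CUDA C.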

~~~
PaulHoule
I think the low level machine learning stuff is going to become a commodity
and might not really be worth your time.

For instance, I worked at a place where people couldn't be bothered to compute
Lagrange multipliers to apply constraints, or automate differentiation, etc.
When I showed up they were going in circles, and getting the project done was
like pulling teeth. Huge amounts got spent on performance optimization with
SIMD instructions and such, but really they should have used off-the-shelf
algorithms and focused on value add.

The biggest challenges with machine learning are in formulating a problem
which is both possible to solve and useful and in getting training data. Not a
lot of people are doing that because of Kaggleism -- the way that academics
and many would-be learners are competing to get another 0.01% accuracy on a
small number of problems rather than industrializing their practice.

~~~
reacharavindh
I gravitate towards the category of people who are more interested in
formulating a problem and making use of the right tools to get to a clever
solution. But you need the other category of people, who work on getting that
extra 0.01% improvement, just as much. After all, you can only industrialize
what the academics come up with, right?

~~~
PaulHoule
As I see it the connection between academic CS and industry practice is weak.

I quit the ACM because CACM was full of endless hand-wringing about the
problems of tenured faculty and never seemed concerned about the problems
their students would face once they got into the workforce. Similarly, when I
sit in at the A.I. seminar at Cornell often people are giving job talks and I
hear they are going to Stanford, CMU, Google, Microsoft, overall a very short
list of academic and industrial places. None of them are going to work for
startups.

I got really depressed reading years and years of TREC conference proceedings
looking for knowledge about how to create more relevant search. The one thing
I learned was that almost all of the ideas that I thought would improve search
relevance actually don't.

It wasn't until I read

[https://www.amazon.com/TREC-Experiment-Evaluation-Informatio...](https://www.amazon.com/TREC-Experiment-Evaluation-Information-Electronic/dp/0262220733/)

that I got enough of a synoptic view to realize that only two significant
developments were made in the first ten years, despite the participation of a
huge number of smart people. One of them was the BM25 algorithm, which was not
incorporated into the Lucene search engine until decades after it was
discovered, and is still rarely used because methods for optimizing its
parameters are rarely used.
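For readers unfamiliar with it: BM25 scores a document against a query by summing, per query term, an inverse-document-frequency weight times a saturated term frequency, with parameters k1 and b (the ones the comment says are rarely tuned). A minimal sketch in plain Python; the toy corpus and default parameter values are illustrative, not from the thread:

```python
import math
from collections import Counter


def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against a tokenized query."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: in how many documents each query term appears.
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Term-frequency saturation, normalized by document length.
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(score)
    return scores


docs = [
    "the quick brown fox".split(),
    "gpu programming with cuda kernels".split(),
    "search engines rank documents".split(),
]
print(bm25_scores("cuda gpu".split(), docs))  # only the second doc scores > 0
```

The k1 and b knobs control term-frequency saturation and length normalization; leaving them at defaults for every corpus is exactly the "parameters are rarely optimized" situation described above.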

