
Doing anything large on a machine without CUDA is a fool's errand these days. Get a GTX1080 or, if you're not budget constrained, a Pascal-based Titan. I work in this field, and I would not be able to do my job without GPUs -- as simple as that. You get a 5-10x speedup right off the bat, sometimes more. A very good return on $600, if you ask me.
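
If you want to sanity-check that speedup on your own box, a quick matmul timing is enough. A minimal sketch, assuming TensorFlow 2.x with a CUDA-enabled GPU installed; real training workloads will show different ratios:

    import time
    import tensorflow as tf

    def bench(device, n=4096, reps=10):
        with tf.device(device):
            a = tf.random.normal((n, n))
            b = tf.random.normal((n, n))
            tf.linalg.matmul(a, b)  # warm-up; don't time setup
            start = time.time()
            for _ in range(reps):
                c = tf.linalg.matmul(a, b)
            _ = c.numpy()  # block until the device actually finishes
            return (time.time() - start) / reps

    print("CPU: %.3fs/matmul" % bench("/CPU:0"))
    print("GPU: %.3fs/matmul" % bench("/GPU:0"))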



If you're budget constrained, a cheaper card will still get you massive improvements. I'm on a GTX 970, and it far outstrips the CPU. Even a GTX 650 (about £65) should outperform a CPU.


For a student doing this in their free time, $600 can be a huge sum, usually two months' rent.

That’s not easily paid.


Buddy up with a professor/grad student/research group who will let you steal some cycles.


That’s just so simple, eh? Except that hundreds of students and research groups are already competing for cycles, the combined compute cluster of several universities is booked out for months, and it wouldn’t even support TensorFlow in the first place, so I’d have to write my own framework or port everything to whatever those supercomputer architectures do support.


Spending 10 months instead of 1 or 2 and not getting anywhere is also not free.


The problem is that as a student I just cannot, in any way, get the money for a 1080. The choice is spending 10 months, or not even starting.


[flagged]


Please don't snipe at people. If you don't have something constructive and civil to say, please just don't comment.


I didn't snipe. This is life advice. Get a part time job. If you're in North America (Canada or US) and you can't squeeze out $400-600 from your budget over the period of a year, you're making excuses, pure and simple. Cook at home, drop cable subscription, don't go to Starbucks, do part time jobs, and so on and so forth. That's what students did back when I was one.


I’m in Germany, have a part-time job, cook at home, have no cable subscription, don’t go to Starbucks, and buy food at ALDI whenever possible.

Rent has been going up every year, but wages haven’t, so by now for me, rent for a small apartment is over 60% of my monthly income.

The world has changed quite a bit since you were a student.


Might not be applicable for you, but maybe see if you can get a grant from your school? I'm going to try that this quarter to see if I can get some GPUs.


My student years were in Moscow in the early '90s, amid chaos and hyperinflation. Even there, I would have been able to scrape together a few hundred bucks for something that's critical to my academic success. Get a roommate, maybe? Find a better paying part-time job? I'm having a hard time imagining someone who can program and who writes papers about machine learning not being able to find a decent paying part-time job anywhere in the world.


> for something that's critical to my academic success

Well, that’s the problem. It’s not. I can pass this and get the same grade whether or not I have good results. With CPU training, the paper just ends up being ignored.

If it was critical for my success, I’d definitely find a way to do it, but that’s the point, it’s not critical – it would just improve my results.


You're confusing your grade with academic success. Those two things are related, but not the same. If this is something you want to pursue, go after it. Grades matter, but they're not the only thing that matters.


I’m still in undergrad, studying for a B.Sc., so it’s not like I’m expected to do actual science, but indeed, you’re right.


There was a thread recently, maybe on Reddit, about a Facebook group or somewhere where people give away EC2 or Azure credits to people who want to do HPC or deep learning or whatever. But I can't google it.


I just spent 3 months CPU-training a network for a paper I was writing. If I’d had any other option, you can be sure I’d have used it.


Or set up an EC2 GPU instance - spot prices are usually in the sub-$0.20/hour range.


EC2 GPUs train slower than local hardware and are more expensive long term. The upside is being able to scale much more easily, but I'd definitely recommend a good consumer-grade GPU over EC2 if you're planning on using it for months as opposed to days.


They can also be unceremoniously preempted in the middle of your week-long training run.
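
Which is why, if you do go the spot route, you checkpoint religiously. A minimal sketch using Keras; the model, x_train/y_train, and the 'ckpt.h5' path are assumptions, and restoring the epoch counter is left out:

    import os
    from keras.models import load_model
    from keras.callbacks import ModelCheckpoint

    ckpt_path = "ckpt.h5"  # hypothetical path; ideally on EBS or S3
    if os.path.exists(ckpt_path):
        model = load_model(ckpt_path)  # pick up after a preemption

    # ModelCheckpoint writes the model to disk after every epoch, so
    # a killed spot instance costs you at most one epoch of work.
    model.fit(x_train, y_train, epochs=100,
              callbacks=[ModelCheckpoint(ckpt_path)])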


And why a Pascal-based Titan? Is it the best investment in terms of performance per dollar spent?

Also, how cost effective is it to use cloud GPUs for real-world machine learning?


Cloud GPUs are cost effective if you need to fine-tune a pretrained network (e.g., use a pretrained ResNet/VGG/AlexNet for custom classes, as in [1]), if you only need inference, or if you don't want the upfront costs.

A 4GB GTX1050 is ~$180. A p2 instance on Amazon is $0.90/hour. The cost effectiveness depends on whether you already have a PC.

[1] https://blog.keras.io/building-powerful-image-classification...
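
For the curious, the fine-tuning pattern in [1] boils down to something like this. A sketch assuming Keras; num_classes and the input size are yours to pick:

    from keras.applications import VGG16
    from keras.models import Model
    from keras.layers import Flatten, Dense

    num_classes = 10  # assumption: your own number of classes

    # Reuse the ImageNet-trained convolutional base, freeze it, and
    # train only a small classifier head for your own classes.
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False

    x = Flatten()(base.output)
    x = Dense(256, activation="relu")(x)
    out = Dense(num_classes, activation="softmax")(x)

    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy")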


And the cost of your electricity, don't forget.


Google says the current mean cost to US domestic consumers is $0.12/kWh.

TDP specs: GTX 1050: 75 W; GTX 1080: 180 W; Titan: 250 W.

Triple these numbers for the overall machine's PSU spec.

So even the Titan costs $0.03 per hour to run, or maybe $0.10 if the rest of the machine is flat-out.

edit: $0.19/kWh for San Francisco residents, domestic rates.
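
The arithmetic, if you want to plug in your own card and local rate:

    def cost_per_hour(watts, usd_per_kwh):
        return watts / 1000.0 * usd_per_kwh

    print(cost_per_hour(250, 0.12))  # Titan alone: $0.03/hr
    print(cost_per_hour(750, 0.12))  # whole machine flat-out: $0.09/hr
    print(cost_per_hour(750, 0.19))  # same box at SF rates: ~$0.14/hr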


The Titan XP is the maximum single-chip performance you can buy right now. $1200 is well worth it if ML is part of your career; the time saved will pay for it.


Cloud GPUs are not economical if you use them 24x7x365 (which for any serious deep learning researcher or engineer is usually the case). The only scenario I can think of in which they'd be more economical than something under your desk is when you need to run a massive, embarrassingly parallel workload: e.g., training dozens of models at the same time with different hyperparameters, for a few days. You could do it cheaper, but it would take a long time and be a massive pain in the ass, so you pay the pretty penny and get it done in a week.
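
The embarrassingly parallel case needs nothing fancier than one process per GPU, pinned via CUDA_VISIBLE_DEVICES. A sketch; train.py and its --lr flag are hypothetical stand-ins for your own training script:

    import os
    import subprocess

    learning_rates = [1e-2, 1e-3, 1e-4, 1e-5]
    procs = []
    for gpu, lr in enumerate(learning_rates):
        # Each child process sees exactly one GPU.
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(
            ["python", "train.py", "--lr", str(lr)], env=env))
    for p in procs:
        p.wait()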

For my needs I have a machine with an LGA 2011-v3 socket and four GTX1080 GPUs. It warms up my man cave pretty nicely in winter. I also have access to about a hundred GPUs (older Titans, Teslas, newer 1080s, and Pascal Titans) at work that I share with others.

Now, regarding the Titan. The Titan is actually not that much faster than the GTX1080, so in terms of raw speed there's no reason to pay twice as much. BUT, it has 4GB more RAM, which lets you run larger models. NVIDIA rightly decided that for a $100+/hr deep learning researcher an extra $600 is not going to be that big of a deal, and priced the card accordingly. If your models fit into 8GB, you'll be better off buying two 1080s instead.
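
A rough way to gauge whether a model fits: weights alone are params x 4 bytes in fp32, and training needs very roughly 3-4x that once you add gradients, optimizer state, and activations. A back-of-envelope sketch, not a precise rule:

    def weight_gb(num_params, bytes_per_param=4):  # fp32
        return num_params * bytes_per_param / 1024**3

    # VGG16 has ~138M parameters:
    print(weight_gb(138e6))      # ~0.51 GB of weights alone
    print(weight_gb(138e6) * 4)  # ~2.1 GB, rough training footprint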

As to me, I'm thinking of replacing at least one of my 1080s with a Titan, to be able to train larger models.

On a purely TFLOP, or even TFLOP/watt, basis, it doesn't currently make sense to buy anything that isn't Pascal-based.



