That’s not easily paid.
Rent has been going up every year, but wages haven't, so by now rent for a small apartment eats over 60% of my monthly income.
The world has changed quite a bit since you were a student.
Well, that’s the problem. It’s not. I can pass this and get the same grade whether or not I have good results. With CPU training, the paper just ends up ignored.
If it was critical for my success, I’d definitely find a way to do it, but that’s the point, it’s not critical – it would just improve my results.
Also, how cost-effective is it to use cloud GPUs for real-world machine learning?
A 4GB GTX 1050 is ~$180. A p2 instance on Amazon is $0.90/hour, so the card pays for itself after roughly 200 hours of use. The cost-effectiveness depends on whether you already have a PC to put the card in.
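If you want to plug in your own numbers, the break-even point is just the card price divided by the hourly rental rate, minus the electricity you'd pay to run the card yourself. A rough sketch, where the function name, the 75 W draw, and the ~$0.12/kWh electricity rate are my own assumptions:

    # Hours of cloud GPU rental that cost as much as buying the card outright.
    def break_even_hours(card_price, cloud_rate_per_hr, card_watts=75, usd_per_kwh=0.12):
        # Owning the card still costs electricity, so subtract that from the cloud rate.
        electricity_per_hr = card_watts / 1000 * usd_per_kwh
        return card_price / (cloud_rate_per_hr - electricity_per_hr)

    print(break_even_hours(180, 0.90))  # GTX 1050 vs a p2 instance: ~200 hours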
GTX 1050 75W
GTX 1080 180W
Triple these numbers for the overall machine's PSU spec.
So even a 250 W Titan costs only about $0.03 per hour in electricity, or maybe $0.10 if the rest of the machine is running flat-out.
edit: electricity is about $0.19/kWh for San Francisco residents, at domestic rates.
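For anyone redoing this for their own card and rate, the arithmetic is just watts x hours x price per kWh. A minimal sketch, assuming a 250 W Titan, the rough 3x whole-machine rule above, and ~$0.12/kWh (which is roughly what the $0.03 figure implies; SF's $0.19 pushes it closer to $0.05):

    # Electricity cost per hour of training: watts -> kW -> dollars.
    def run_cost_per_hour(gpu_watts, usd_per_kwh, whole_machine=False):
        # Rule of thumb from above: the whole box under load draws ~3x the GPU's TDP.
        watts = gpu_watts * 3 if whole_machine else gpu_watts
        return watts / 1000 * usd_per_kwh

    print(run_cost_per_hour(250, 0.12))                      # Titan alone: ~$0.03/hr
    print(run_cost_per_hour(250, 0.12, whole_machine=True))  # whole machine flat-out: ~$0.09/hr
    print(run_cost_per_hour(250, 0.19))                      # same card at SF rates: ~$0.05/hr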
For my needs I have a machine with an LGA 2011-v3 socket and four GTX 1080 GPUs. It warms up my man cave pretty nicely in winter. I also have access to about a hundred GPUs (older Titans, Teslas, newer 1080s, and Pascal Titans) at work that I share with others.
Now, regarding the Titan: it's actually not that much faster than a GTX 1080, so in terms of raw speed there's no reason to pay twice as much. BUT it has 4GB more RAM, which lets you run larger models. NVIDIA rightly decided that for a $100+/hr deep learning researcher an extra $600 is not going to be that big of a deal, and priced the card accordingly. If your models fit into 8GB, you'll be better off buying two 1080s instead.
As for me, I'm thinking of replacing at least one of my 1080s with a Titan, to be able to train larger models.
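If you're not sure whether a model fits into 8GB, a back-of-the-envelope estimate from the parameter count gets you most of the way there. A rough sketch, assuming FP32 weights and Adam-style optimizer state; activation memory depends on batch size and architecture, so treat it as a lower bound:

    # Very rough training memory: weights + gradients + two Adam moment buffers, all FP32.
    def training_mem_gb(num_params, bytes_per_value=4, copies=4):
        return num_params * bytes_per_value * copies / 1024**3

    print(training_mem_gb(100e6))  # a 100M-parameter model: ~1.5 GB before activations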
On a purely TFLOPS, or even TFLOPS-per-watt, basis, it doesn't currently make sense to buy anything that isn't a Pascal card.
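To see why, divide peak FP32 throughput by TDP. A quick sketch with approximate boost-clock figures from memory, so double-check them against the spec sheets before deciding anything:

    # TFLOPS per watt for a few cards, using approximate FP32 boost figures and TDP.
    cards = {
        "GTX 1080 (Pascal)": (8.9, 180),
        "Titan X (Pascal)":  (11.0, 250),
        "Titan X (Maxwell)": (6.6, 250),
    }
    for name, (tflops, watts) in cards.items():
        print(f"{name}: {tflops / watts:.3f} TFLOPS/W")  # the Pascal cards come out ahead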