I have perfectly good monospaced fonts on this computer, but none of them are Consolas, Menlo, Monaco, or Lucida Console, so I end up with a default proportional serif font.
If the keyword monospace is somewhere in your font stack, some browsers use 13px as the default font size instead of the usual 16px. The workaround used to be to include both serif and monospace in your font stack. That worked some time ago; I don't know whether it still does in contemporary browsers.
Am I missing something?
If you have your data in a Hadoop cluster and are doing image recognition, Yahoo's CaffeOnSpark is the only truly distributed engine out there. It uses MPI to share model state between executors.
There's also data parallelism with parameter averaging, which we've been doing in Deeplearning4j for the last few years. We also support a lot more than just images. We have the ETL pipelines (Kafka, etc.) to go with it. Watch for a blog post from us on Parallel Forall (NVIDIA's blog) where we explain some of this.
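To make the idea concrete, here is a toy sketch of data parallelism with parameter averaging in pure Python. This is just the general concept (each worker takes a gradient step on its own data shard, then the parameters are averaged), not Deeplearning4j's actual implementation; the model, learning rate, and data are all made up for illustration.

```python
# Toy sketch of data-parallel training with parameter averaging.
# Each "worker" trains on its own shard, then parameters are averaged.

def sgd_step(params, shard, lr=0.1):
    """One gradient step of a 1-D least-squares fit y = w*x (illustrative)."""
    w = params["w"]
    grad = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return {"w": w - lr * grad}

def average_params(param_list):
    """Average each parameter across workers (the synchronization step)."""
    n = len(param_list)
    return {k: sum(p[k] for p in param_list) / n for k in param_list[0]}

# Data generated from y = 3x, split across two "workers".
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
params = {"w": 0.0}
for _ in range(100):
    local = [sgd_step(params, shard) for shard in shards]  # parallel in reality
    params = average_params(local)                          # sync step

print(round(params["w"], 2))  # converges toward 3.0
```

Note that averaging is what keeps this stable: worker 2's shard alone would diverge at this learning rate, but the averaged update contracts toward the true weight.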
I gave a framework agnostic view of the concepts you should consider when looking at distributed deep learning as well:
Their blog post in April mentioned it - https://research.googleblog.com/2016/04/announcing-tensorflo...
That said, I haven't actually attempted any distributed processing, but it looks possible. If anyone has actually tried it, I'd be curious to hear what people with experience have to say about it.
That implementation requires starting individual tasks on each node in your cluster.
>To create a cluster, you start one TensorFlow server per task in the cluster. Each task typically runs on a different machine, but you can run multiple tasks on the same machine (e.g. to control different GPU devices).
I'm used to using tools that can roll out to a cluster with more finesse than that. The Spark wrapper seems to provide some capability to do this automatically, but even the Spark wrapper requires installing python libraries on each node.
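The "one server per task" model from the quote above can be sketched in plain Python. The hostnames here are made up; the real workflow (in TF ~0.8) is to pass a dict like this to tf.train.ClusterSpec and then manually start a tf.train.Server process for each (job, task_index) pair on the right machine, which is exactly the rollout chore being complained about.

```python
# Sketch of what "one TensorFlow server per task" implies operationally.
# Hostnames are hypothetical; in TF ~0.8 this dict would go to
# tf.train.ClusterSpec, with a tf.train.Server started on every machine.

cluster = {
    "ps":     ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

# Every (job, task_index) pair needs its own process somewhere:
launch_plan = [
    (job, idx, host)
    for job, hosts in cluster.items()
    for idx, host in enumerate(hosts)
]

for job, idx, host in launch_plan:
    # In practice: ssh into `host` and run a script that calls
    # tf.train.Server(cluster, job_name=job, task_index=idx)
    print("start task {}/{} on {}".format(job, idx, host))
```

Three processes on three machines for even this tiny cluster, hence the appeal of an orchestration layer that does the fan-out for you.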
Since TensorFlow has native dependencies on CUDA stuff for GPU support, I don't think there's much of a way to get around installing things on every machine. You might be able to package a python env without CUDA for spark to run using conda. Here's an interesting blog post about that: https://www.continuum.io/blog/developer-blog/conda-spark
But I'm not sure I see the point in running TensorFlow without GPU support. And if you're hoping to run GPU machines on an existing spark cluster and intelligently allocate the GPU stuff to the right machine... that's gonna be tough. Here's an interesting talk on that from the last spark summit: https://www.youtube.com/watch?v=k6IOWblLQK8&feature=youtu.be
Ultimately, you're probably better off just running your own gpu cluster strictly for your TensorFlow model on ephemeral AWS spot instances.
Or just use Google Cloud Machine Learning. That's what Google wants and expects you to do anyway. Borg is the Borg. You will be assimilated.
What you want to be able to do is control which devices (CPUs, GPUs, or co-processors) execute which part of your model (e.g., GPU for training, co-processors for inference, who knows what else).
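A pure-Python illustration of that kind of placement table (not real TensorFlow code; TF does this with device strings like "/gpu:0" passed to tf.device, and the stage names here are invented):

```python
# Illustrative sketch: mapping pipeline stages to device strings of the
# kind TensorFlow uses. Stage names are hypothetical.
placement = {
    "input_pipeline": "/cpu:0",   # data loading stays on CPU
    "training_ops":   "/gpu:0",   # heavy matmuls on the GPU
    "inference_ops":  "/gpu:1",   # or a co-processor, if you have one
}

def device_for(op_name, placement, default="/cpu:0"):
    """Pick the device a stage should run on, falling back to CPU."""
    return placement.get(op_name, default)

print(device_for("training_ops", placement))  # /gpu:0
print(device_for("summary_ops", placement))   # unlisted stage -> /cpu:0
```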
Yahoo released some code to deal with similar issues, but with Caffe on YARN.
TensorFlow is admirably easier to install than some other frameworks:
# Ubuntu/Linux 64-bit, CPU only:
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
# Ubuntu/Linux 64-bit, GPU enabled. Requires CUDA toolkit 7.5 and CuDNN v4. For
# other versions, see "Install from sources" below.
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
That said, I haven't played around with AI frameworks too much, so I might just be missing a real stinker.
I found these helpful (on AWS)
Of course you don't need to install CUDA just to learn; you can run TensorFlow on CPU only. But part of the point of the graph paradigm is to design a computation and offload it to a GPU.
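The build-then-run idea behind that paradigm can be sketched in a few lines of plain Python (no TensorFlow): you first describe the computation as a graph of nodes, and only a later "run" call evaluates it, which is what gives a real engine the chance to ship the whole graph to a GPU.

```python
# Minimal sketch of the build-then-run graph paradigm (pure Python).
# Nothing is computed when the graph is built; run() evaluates it, and
# a real engine could dispatch that evaluation to a GPU instead.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def run(node):
    """Recursively evaluate the graph on the CPU."""
    if node.op == "const":
        return node.value
    args = [run(i) for i in node.inputs]
    return args[0] + args[1] if node.op == "add" else args[0] * args[1]

# y = (2 + 3) * 4 -- building this line computes nothing yet.
y = mul(add(const(2), const(3)), const(4))
print(run(y))  # 20
```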
It honestly looks like pretty cool stuff, looking forward to having time to play around with it some day.
By comparison, here's what you need to install (manually!) for Torch on OSX:
CUDA (and there's a whole other thread trying to get that to work..)
This learning path is also available free for Safari Books Online subscribers.
Is this planned to be released as an intro in a book about tensorflow?
It could be a nice little loop if some of the creators of the AI that eventually accomplishes this learned part of their craft from O'Reilly books.