TPUs are only one part of this eye-opening presentation. Skip to page 28, where Jeff starts talking about:
* Using reinforcement learning so the computer can figure out how to parallelize code and models on its own. In experiments, the machine beats human-designed parallelization.
* Replacing B-tree indices, hash maps, and Bloom filters with data-driven indices learned by deep learning models. In experiments, the learned indices outperform the usual stalwarts by a large margin in both computing cost and performance, and are auto-tuning.
* Using reinforcement learning to manage datacenter power. Machine intelligence outperforms human-designed energy-management policies.
* Using machine intelligence to replace user-tunable performance options in all software systems, eliminating the need to tweak them with command line parameters like --num-threads=16, --max-memory-use=104876, etc. Machine intelligence outperforms hand-tuning.
* Using machine intelligence for all tasks currently managed with heuristics. For example, in compilers: instruction scheduling, register allocation, loop nest parallelization strategies, etc.; in networking: TCP window size decisions, backoff for retransmits, data compression, etc.; in operating systems: process scheduling, buffer cache insertion/replacement, file system prefetching, etc.; in job scheduling systems: which tasks/VMs to co-locate on the same machine, which tasks to pre-empt, etc.; in ASIC design: physical circuit layout, test case selection, etc. Machine intelligence outperforms human heuristics.
IN SHORT: machine intelligence (today, that means deep learning and reinforcement learning) is going to penetrate and ultimately control EVERY layer of the software stack, replacing human engineering with auto-tuning, self-improving, better-performing code.
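As a toy illustration of the auto-tuning bullet above (my own made-up sketch, not anything from the talk): replace a hand-tuned --num-threads flag with a simple epsilon-greedy bandit that learns a good setting from measured throughput.

```python
# Toy auto-tuner: learn a thread-count setting from observed throughput
# instead of hard-coding it with a --num-threads flag.
import random, time
from concurrent.futures import ThreadPoolExecutor

def toy_workload(n_items=2000):
    # stand-in for whatever work the flag used to tune
    return sum(i * i for i in range(n_items))

def run_with_threads(num_threads, tasks=64):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(lambda _: toy_workload(), range(tasks)))
    return tasks / (time.perf_counter() - start)   # reward = throughput

candidates = [1, 2, 4, 8, 16, 32]
avg_reward = {c: 0.0 for c in candidates}
counts = {c: 0 for c in candidates}

for step in range(60):
    if step < len(candidates):
        choice = candidates[step]                                  # try each setting once
    elif random.random() < 0.2:
        choice = random.choice(candidates)                         # explore
    else:
        choice = max(candidates, key=lambda c: avg_reward[c])      # exploit best so far
    reward = run_with_threads(choice)
    counts[choice] += 1
    avg_reward[choice] += (reward - avg_reward[choice]) / counts[choice]

print("learned setting:", max(candidates, key=lambda c: avg_reward[c]))
```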
OK, I can understand how a Bloom filter can be replaced by a neural network predictive model. You could actually train it while stuff gets added. This would make adding somewhat more expensive, but ...
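Roughly how I'd picture that working (a toy sketch, with a plain Python set standing in for the backup filter that catches the keys the model misses, so there are no false negatives):

```python
# Sketch of a "learned Bloom filter": a classifier answers most membership
# queries, and a backup structure catches the keys the model is unsure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
keys = rng.integers(0, 1_000_000, size=5000)              # members (toy data)
non_keys = rng.integers(1_000_000, 2_000_000, size=5000)  # known non-members

X = np.concatenate([keys, non_keys]).reshape(-1, 1).astype(float)
y = np.concatenate([np.ones(len(keys)), np.zeros(len(non_keys))])
model = LogisticRegression().fit(X, y)

THRESHOLD = 0.5
member_probs = model.predict_proba(keys.reshape(-1, 1).astype(float))[:, 1]
backup = {int(k) for k, p in zip(keys, member_probs) if p < THRESHOLD}

def probably_contains(key):
    if model.predict_proba([[float(key)]])[0, 1] >= THRESHOLD:
        return True          # may be a false positive, like a normal Bloom filter
    return key in backup     # catches every inserted key the model misses

print(all(probably_contains(int(k)) for k in keys))       # True: no false negatives
```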
Ah, so it appears they're advocating using neural networks as index functions into sorted arrays (hash maps are simply sorted by hash instead of by something in the data).
So what they do is take a FIXED set of data that you want to look up in quickly, already sorted, and train a model (one architecture is 2 layers of width 32 with ReLU activation, but they also train sequences of models, which makes a HUGE difference to the error; since the max and min error bounds are what drive the cost, you minimize the max error rather than the average error).
They have the following brilliant insight: an index over a database (which gives the position of the data given the search key) is a CDF (cumulative distribution function)! That's brilliant! Of course it is!
And of course, this is Google. Once you have an index trained (which is a linear operation), you can translate the neural network model directly into C++ and compile it into machine instructions that don't depend on anything like TensorFlow libraries. The resulting code can be pasted into anything you want. This may work fast, but seems less than entirely practical ... although I guess you could do the same in Java far more easily and just include that code.
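Here's a rough sketch of the index-as-CDF idea (my own toy version, using sklearn as a stand-in for their training setup): fit a small model on key -> position over a fixed sorted array, record the worst-case error, and fall back to a bounded binary search around the prediction.

```python
# Toy learned index: a small model predicts a key's position in a sorted array
# (i.e. it approximates the CDF), and a bounded search corrects the residual error.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
keys = np.sort(rng.lognormal(0.0, 1.0, size=50_000))
x = ((keys - keys.min()) / (keys.max() - keys.min())).reshape(-1, 1)  # scale keys to [0, 1]
positions = np.arange(len(keys))

# Roughly the "2-layer, 32-wide, ReLU" shape mentioned above; the target is the CDF.
model = MLPRegressor(hidden_layer_sizes=(32, 32), activation='relu',
                     max_iter=300, random_state=0)
model.fit(x, positions / len(keys))

pred_pos = model.predict(x) * len(keys)                        # p = F(key) * N
max_err = int(np.ceil(np.max(np.abs(pred_pos - positions))))   # worst-case miss

def lookup(key):
    xk = (key - keys.min()) / (keys.max() - keys.min())
    guess = int(round(model.predict([[xk]])[0] * len(keys)))
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
    i = lo + int(np.searchsorted(keys[lo:hi], key))            # search only inside the error bound
    return i if i < len(keys) and keys[i] == key else None

print(lookup(keys[12345]) == 12345, "max error:", max_err)
```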
In case anyone wants to check out some pre-history, back in 2002 Manfred Warmuth et al.[0] were using learning (Weighted Majority) to drive systems components like cache replacement policy. I'm not sure where the work went from there, but add it to the pile of techniques.
Thanks for the link. Very interesting. I found this [1] from 2015.
Reading your cite, the practical issue seems to me to be that the optimizer's memory footprint costs may in fact negate any benefit (e.g. ~40% over LRU) obtained in reducing cache misses.
My gut feeling is that this approach (for online systems) may work best with a hardware component (a card hosting the 'experts' and their virtual model e.g. the "virtual cache"). The distributed variant also seems worth exploring.
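Something like the following toy sketch is how I read the expert/virtual-cache idea, in the spirit of Weighted Majority (not Warmuth et al.'s actual algorithm): each expert maintains a small virtual cache, the real policy follows whichever expert currently has the highest weight, and experts that miss get multiplicatively down-weighted.

```python
# Toy "mixture of experts" cache replacement in the spirit of Weighted Majority.
from collections import OrderedDict, Counter

CAP = 4  # tiny cache for illustration

class LRUExpert:
    def __init__(self):
        self.c = OrderedDict()
    def access(self, k):                       # returns True on hit
        hit = k in self.c
        if hit:
            self.c.move_to_end(k)
        else:
            if len(self.c) >= CAP:
                self.c.popitem(last=False)     # evict least recently used
            self.c[k] = True
        return hit

class LFUExpert:
    def __init__(self):
        self.c, self.freq = set(), Counter()
    def access(self, k):
        self.freq[k] += 1
        hit = k in self.c
        if not hit:
            if len(self.c) >= CAP:
                self.c.remove(min(self.c, key=lambda x: self.freq[x]))  # evict least frequent
            self.c.add(k)
        return hit

experts = {"LRU": LRUExpert(), "LFU": LFUExpert()}
weights = {name: 1.0 for name in experts}
BETA = 0.9  # multiplicative penalty for an expert that misses

hits = 0
trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3] * 50  # made-up access pattern
for key in trace:
    leader = max(weights, key=weights.get)          # follow the best expert so far
    results = {name: e.access(key) for name, e in experts.items()}
    hits += results[leader]
    for name, hit in results.items():
        if not hit:
            weights[name] *= BETA                   # weighted-majority style update

print("hit rate following learned leader:", hits / len(trace), weights)
```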
Good summary. Some systems groups are already going in this direction. PelotonDB is trying to use DL to build a self-tuning DB: https://github.com/cmu-db/peloton
We have been trying to self-tune resource management decisions in Hadoop YARN using deep learning.
So basically it will replace all heuristics/greedy optimization algorithms. I am wondering if ML can come up with better sorting algorithms; or, I guess, when you can use ML for the end strategy of the optimization, you don't have to sort at all!
> A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. “Google uses Bayesian filtering the way Microsoft uses the if statement,” he said
It'd be really interesting to know the per-unit math on that.
Designing and taping out a new ASIC isn't cheap.
Presumably Google needs to use a fairly recent process (22nm or better?), which means GlobalFoundries, TSMC, or Samsung (do any of the Chinese native fabs have 22nm yet?). I wonder who is building them?
The TFLOPS numbers are not directly comparable. The TPUs use reduced precision in some areas, whereas I am guessing the Titan V numbers are based on single precision operations.
If all you need is reduced precision, then it’s fair to compare the two. I’d also assume memory bandwidth matters just as much as TFLOPS for ML workloads.
Great talk, with lots of new insights into what's happening at Google. I really think his point that ImageNet is the new MNIST now holds true. Even research labs should be buying DeepLearning11 servers (10 x 1080Ti) for $15k and training large models in a reasonable amount of time. It may seem that Google are way ahead, but they are just doing synchronous SGD, and it was interesting to see the drop in prediction accuracy from 128 TPU2 cores to 256 TPU2 cores for ImageNet (76% -> 75% accuracy). So the algorithms for distributed training aren't unknown, and with cheap hardware like the DL11 server, many well-financed research groups can compete with this.
On a DL11 server it will take about 60 hrs and only cost you $15k upfront. The economics speak for themselves for fp32 training, at this moment in time.
Great presentation. As far as applications go, I already thought this might be useful in lightweight formal methods to spot problems and suggest corrections for failures in Rust's borrow checker, separation logic on C programs, proof tactics, and static analysis tooling. For the Rust example, a person might try to express a solution in the language that fails the borrow checker. If they can't understand why, they submit it to the system, which attempts to spot where the problem is. The system might start with humans spotting the problem and restructuring the code to pass the borrow checker. Every instance of those would feed into the learning system, which might eventually do that on its own. There's also potential to use automated equivalence checks/tests between user-submitted code and the AI's suggestions to help a human in the loop decide if it's worth review before passing it on to the other person.
In hardware, both digital and analog designers seem to use lots of heuristics in how they design things. Certainly could help there. Might be especially useful in analog due to small number of experienced engineers available.
I speculate that Google will sell TPUv2 for as little as 500 USD per PCIe card as early as 2018. Nvidia's Volta TensorCores are essentially the same: 32-bit accumulators and 16-bit multipliers, but GPUs are more general-purpose, which is not necessary for Deep Learning, since the most intensive operation is the dot product (y += w*x).
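For what it's worth, the mixed-precision pattern being described (16-bit multiplies feeding a 32-bit accumulator) is easy to sketch in numpy; this just emulates the arithmetic, it is not TPU or TensorCore code:

```python
# Sketch of the mixed-precision dot product: products of 16-bit values
# accumulated in a 32-bit register (y += w * x).
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float16)   # 16-bit weights
x = rng.standard_normal(4096).astype(np.float16)   # 16-bit activations

acc = np.float32(0.0)
for wi, xi in zip(w, x):
    acc += np.float32(wi) * np.float32(xi)          # exact fp16 product, fp32 accumulate

reference = np.dot(w.astype(np.float64), x.astype(np.float64))
print(acc, reference, abs(acc - reference))         # 32-bit accumulation tracks fp64 closely
```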
I haven't read that paper (Learned Index Structures), but things like gperf have existed for decades. Are these enhanced data structures dynamic? I.e., unlike gperf, which is static, do they reoptimize as you insert new elements?
In the case of the hash table, I assume it's using the model to compute the hash function.
No, it doesn't handle inserts. On the other hand, the paper writes:
"An ... approach to handling inserts is to build a delta-index. All inserts are kept in buffer and from time to time merged with a potential retraining of the model."
Under some assumptions it does handle inserts. From [1]:
Finally, assume that the inserts follow roughly a similar pattern as the learned CDF; [...] Under these assumptions the model might not need to be retrained at all.
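A toy sketch of that delta-buffer scheme (my own simplification, with a 1-D linear fit standing in for the real learned model): inserts land in a small buffer, lookups check the buffer and the learned index, and a merge re-sorts and refits when the buffer grows.

```python
# Sketch of the delta-buffer idea quoted above: the model indexes a frozen
# sorted array, new keys go to a buffer, and a "merge" rebuilds and refits.
import numpy as np

class DeltaLearnedIndex:
    def __init__(self, keys):
        self._rebuild(np.sort(np.asarray(keys, dtype=float)))
        self.buffer = []

    def _rebuild(self, sorted_keys):
        self.keys = sorted_keys
        pos = np.arange(len(sorted_keys))
        self.coef = np.polyfit(sorted_keys, pos, 1)        # toy stand-in for the learned CDF
        self.max_err = int(np.ceil(np.max(np.abs(np.polyval(self.coef, sorted_keys) - pos))))

    def insert(self, key):
        self.buffer.append(float(key))
        if len(self.buffer) > 0.1 * len(self.keys):         # merge + "retrain"
            self._rebuild(np.sort(np.concatenate([self.keys, self.buffer])))
            self.buffer = []

    def contains(self, key):
        if key in self.buffer:                               # check the delta first
            return True
        guess = int(round(np.polyval(self.coef, key)))
        lo = max(0, guess - self.max_err)
        hi = max(lo, min(len(self.keys), guess + self.max_err + 1))
        i = lo + int(np.searchsorted(self.keys[lo:hi], key)) # search inside the error bound
        return i < len(self.keys) and self.keys[i] == key

idx = DeltaLearnedIndex(np.random.default_rng(0).uniform(0, 1e6, 100_000))
idx.insert(123.5)
print(idx.contains(123.5), idx.contains(-1.0))               # True False
```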
I thought it was a stretch when reading that Medium post. Now, reading this and thinking about it after finishing two undergraduate classes (one on operating systems and another on compilers), the machine-learning-for-systems part makes a lot of sense. Apart from the heuristics, the learned index structures idea is just fascinating.
Another illuminating sentence from the paper was this:
>This leads to an interesting observation: a model which predicts the position given a key inside a sorted array effectively approximates the cumulative distribution function (CDF). We can model the CDF of the data to predict the position as: p = F(Key) ∗ N
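With toy numbers of my own, the formula checks out against an actual sorted array:

```python
# Toy check of p = F(Key) * N against a sorted array of random keys.
import numpy as np

keys = np.sort(np.random.default_rng(1).uniform(0, 100, 1_000_000))
key = keys[300_000]
F = np.mean(keys < key)     # empirical CDF: share of keys below the search key
p = F * len(keys)           # p = F(Key) * N
print(round(p), "vs true position", 300_000)   # the estimate lands on the true index
```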
Maybe it's just my very limited knowledge as an undergrad, but I'm feeling that this can be the start of something big. Another idea that just came to me is how much of this ML is applicable to the domain of cryptography. In my security class it seemed like many of the famous hash functions, for example, were somehow "found" in a vast space of potential schemes.
Yeah, it is funny that ML, whose biggest media-driven fear is being the ultimate job destroyer, would put this generation of programmers in danger as well.
However, the paradigm shift is inevitable: once discovered, people will use it, and use it anywhere possible.
Eye-opening.