How Google Is Remaking Itself for “Machine Learning First” (backchannel.com)
274 points by steven on June 22, 2016 | 116 comments



I don't believe in "everyone should work on machine learning". I have worked on several deep learning models, but I don't really like it. It is a very different job from software engineering, in my opinion: ML is more about gathering data and tuning models than about building things. I have spent months working on models and barely wrote any code. It is more efficient to have ML experts focus on the modeling and software engineers use the models.

I do believe, however, that some experience is needed to understand what is possible, to get the most out of existing tools, and to be able to communicate with machine learning engineers about your needs.


I concur. ML isn't programming per se; it is experimental problem-solving with a particular dataset and algorithm. Your result may or may not work well, may or may not generalise, and will almost certainly not contribute anything new to any discipline, even to ML. When all the ML work is done we'll have great pattern recognizers but nothing remotely akin to thought. And we won't understand how they work or the best way to build the next one. It isn't AI, although it is a part of AI, just as the visual system is part of AI.

I was reading Domingos' "The Master Algorithm" several days ago and a mathematician inquired about the book. He knew a group of ML developers. His opinion was that "ML doesn't look very interesting: all you do is play with the parameters, turn the knobs, and/or change the model until something works. There's no real progress there; nothing substantial."

Rather than sending a battalion of bright developers into the ML swamp, where they will largely be frustrated, learn little, and contribute less, I'd be tempted to guide them into other fields.


I am a mathematician by trade, and was doing development along with other stuff (reverse engineering and security work, first in my own company, then at Google). So ...

1) I think working knowledge of ML is extremely useful to many developers, and generally under-taught in universities. See the old Joel article which mentions "Google uses Bayesian filtering like MS uses the IF statement" (http://www.joelonsoftware.com/items/2005/10/17.html). A well-rounded developer should know the basics (logistic regression, SVMs, some things about CDNNs, etc.); it will make him much more adept at problem-solving. I suspect Google's internal push to get people up to speed is not to turn them all into ML researchers, but rather to make sure that everybody "knows the basics well enough".

So I think it is useful to teach developers about the things ML has to offer.

2) Mathematically, it seems that in ML the "engineering" side has run far ahead of the theory side. The sudden breakthrough in the mid-2000s is IMO still not fully understood - and parts of it may have been very accidental. Initially, it was thought that pre-training was the big breakthrough, but it is quite unclear what the big breakthrough was. It could be that simply the increase of data / compute sizes and the switch to minibatch-SGD explains why modern DNNs generalize well (interesting paper on the topic: https://arxiv.org/abs/1509.01240). There is a lot of good mathematics to be written, but I am not sure whether the folks at Google will write it - given the incentive structures (performance reviews, impact statements) it is unlikely that somebody gets promoted for "cleaning up the theory".
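To make the minibatch-SGD point concrete, here is a minimal numpy sketch of it on a toy least-squares problem (an illustration only, not anything from the paper):

    import numpy as np

    np.random.seed(0)
    X = np.random.randn(1000, 5)                     # toy inputs
    w_true = np.random.randn(5)
    y = X.dot(w_true) + 0.1 * np.random.randn(1000)  # noisy linear targets

    w = np.zeros(5)
    for step in range(2000):
        idx = np.random.randint(0, len(X), 32)       # sample one minibatch
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T.dot(Xb.dot(w) - yb) / 32     # gradient of the batch MSE
        w -= 0.05 * grad                             # one noisy descent step

    print(np.round(w - w_true, 2))                   # approximately zero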

3) From a development perspective: There are a ton of interesting engineering problems underneath the progress in ML. If you look at Jeff Dean, he is a superstar engineer, not necessarily a mathematician, and a lot of the progress the Google Brain team made were engineering advances to scale / distribute etc. - so by training the engineers in ML, you also get to have better infrastructure over time.

So I don't think they are sending "developers into ML swamps"; I think they are trying to reach the point where "Google uses DNNs like MS uses IF".

Cheers, Thomas


I don't think your points are invalid, but I think you overvalue the data that's available and relevant to most programming tasks. And without novel data, ML can offer little novel value.

Google, Facebook, M$ Research, and perhaps Yahoo are extreme outliers. They have zettabytes of broad unstructured text data, so they mine it. Everybody else has megabytes of narrow structured data, most of it commercial transactions for their products. That stuff has already been effectively mined by traditional basic OLAP methods. Most, if not all, of the value has been extracted.

Mainstream software apps have yet to show the value of using ML. Such apps have access to very limited data of very narrow relevance. The utility of ML in such domains isn't new; it's classic optimization, or Bayesian anticipation. But it's not a game changer. Frankly, the use of ML in most mainstream apps is more likely to add distraction and annoyance as the computer mispredicts your intent -- like Microsoft Bob did.

Maybe "life in the cloud" will create new opportunities for smarter software. But I definitely don't want free apps making their own decisions when to notify me. I guarantee that will get old immediately. So how will this work? Frankly, I can't guess. Like Apple's iAds, programming ML into the mainstream or cloud sounds like an idea that will serve the software / cloud vendor far better than the user.


I don't know why randcraw is being downvoted here: his/her points are vital clarifications.

Humans have been gathering and analyzing data for thousands of years. We have _not_ waited for Google's latest ML or neural nets to do analyses. Otherwise I'd be carving this post onto a stone for future generations to peruse.

The valuable and understandable AI, the step that will make a difference, isn't in "big data" - it's in figuring out how to do what those humans have been doing all those thousands of years.


> I don't know why randcraw is being downvoted here

I can't speak for anyone else, but "M$"


Think outside consumer-facing applications. Medicine, biology, geology (oil, gas, and mining), finance, transportation. Tons of data, tons of dollars, and important problems.


I work in a big pharma analyzing image and experimental data. In a prior life I analyzed social cliques from vast numbers of user transactions. In both cases it seems like greater volumes of data should lead to deeper insights. But as it happens, the amount of useful actionable information in that data was surprisingly limited.

Often the available sensors/assays failed to detect reliable info. Or the phenomenon of interest depended on too many interrelated variables expressed with too great a dynamic range for us to detect reliably or model usefully. (The present lull in genomics R&D illustrates this well, as does the automated interpretation of signals like EEG and NMR spectra.) And the signals that we can extract are often uninterpretable or sporadic. Alas, gathering more data won't yield more signal. Given the present limit on sensor resolution, you just get more mixed signals.

The potential of all ML is limited by the depth of the data, which is essential for discriminating subtler signals. In the domains you mention (medicine, biology, geology, other sciences) I'm convinced we need better sensors more than greater amounts of the same data available now. We need better hypotheses, which lead to better ideas of where to look and what to look for. In general, ML can't help with that. Until we better imagine how the mechanism might work, our questions remain too vague.

To wit, I'm afraid that applying ML to most software apps will suffer from the same limited ROI. I suspect that most app and user data is too shallow for mining to add appreciable value, no matter how clever it is.


Most of that data isn't "big data". And most of it has been analyzed thoroughly. Sure, ML will be used to re-analyze it, but with mostly the same results. As randcraw states "Most/all of the value has been extracted."

Only a wild-eyed ML "gold digger" could imagine that there is a vein of gold in those mines. The reality is that, with few exceptions, we'll find more lumps of coal.

Perhaps I should switch from an ML swamp metaphor to an ML mine metaphor? <--Hah! Do that with ML!


I think it depends on the job. Maybe a web developer gains less from extensive knowledge of ML. But I agree that every computer scientist (whether they work as a software engineer or not) should have some knowledge of ML; there are many things in the curriculum that are less important than ML.

As a snarky remark: maybe I am not yet qualified enough for real criticism as a CS student, but I don't like such sharp distinctions between engineering and theory. All the "trial and error" in ML can be a useful guide to working out the theory. Also, I'd guess the work of Jeff Dean is often more theoretical than the work of an average engineer. While I feel that if we have not developed a theory behind such tools we have not really understood them, no one knows how complex these things really are. I think this makes ML-related engineering harder than software projects with a well-understood theory.

I just hope there are enough computer scientists/mathematicians at universities (or Google ;) ) looking sharply at all the progress made in ML from the engineering side and asking themselves "what does that really mean?", because that's a hell of an interesting problem.

I may be wrong; my ML course is next semester ;)


Or perhaps Google's ML career path is largely a ruse, a Golgafrinchan Ark Fleet Ship B, that Google is using to trim a bloated developer pool?!8-))

http://hitchhikers.wikia.com/wiki/Golgafrincham


> And we won't understand how they work

Is this a critique of the human mind or a praise of AI?

> When all ML work is done we'll have great pattern recognizers but nothing remotely akin to thought

Maybe our brains too are nothing but pattern recognizers. Maybe they are nothing but chemical reactions, or energy fields. But being reductionist about AI won't help us understand it either.


What we need are models that are more introspectable, so that you can find the rules they have learned. Most of the time those rules will be too complex for any human understanding, but from time to time we'll find something interesting, something that we can build other things upon.

I have never used neural nets, etc., but with simple naive Bayes spam filters this was possible and quite useful. I used to check which words were pushing a text into one category or the other, and which were not (when they should have been).
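Roughly like this, as a toy sketch with made-up counts:

    import math
    from collections import Counter

    # Made-up word counts from a labeled corpus.
    spam = Counter({"viagra": 50, "free": 30, "meeting": 2})
    ham = Counter({"viagra": 1, "free": 10, "meeting": 40})

    def log_odds(word):
        # Laplace-smoothed log odds that a word indicates spam.
        p_spam = (spam[word] + 1.0) / (sum(spam.values()) + len(spam))
        p_ham = (ham[word] + 1.0) / (sum(ham.values()) + len(ham))
        return math.log(p_spam / p_ham)

    for w in ["viagra", "free", "meeting"]:
        print(w, round(log_odds(w), 2))  # positive pushes toward spam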


What fields would you guide them to instead?

I may be one of the developers you speak of (with academic aspirations), presently considering my path forward.

I'm sceptical about whether going down into the ML swamp is the best way forward.


I don't know. I know some engineers who have spent months going back-and-forth over communication protocols while barely writing any code, yet somehow their job is considered to be quite core to software engineering. I don't really see how fine-tuning communication protocols is fundamentally different from fine-tuning machine learning models. But overall, I agree with your sentiment: different things are different and appropriate for different people.


Wouldn't that be more akin to the design of the models?


Hi arbre, would you mind explaining what is possible and what benefits from existing tools in machine learning at the moment? I am clueless and find ML rather frustrating to get into.


I meant that learning about ML and getting some field experience helps you figure out when to use ML and how. As for how to get into it, there are a lot of resources, and state-of-the-art algorithms/papers/implementations are freely available. For me, working on ML projects at my job and talking to some experts was ideal, but I am sure it is possible to learn on one's own with enough motivation. Good luck!


Ah, yes, I understood what you meant (and thank you for pointing to where I should look next!). I was hoping, too, that you might share your ML knowledge in layman's terms.


Absolutely. In general, ML needs a collaboration between ML expertise and application domain expertise. It's very helpful if there's someone who can help bridge those two - enough app experience to understand the domain deeply, and enough ML experience to know what questions to ask of the ML gurus and what pitfalls to expect. As I see it, that's one of the goals of the ML ninja program.


This is how software eats your job.


> "everyone should work on machine learning"

> software engineers use the model.

You aren't disagreeing.


Sounds like a job machines could do..


"Moving data around" is what a lot of software engineering is these days. Facebook, Google etc. are more data companies than software companies (and probably close to media than communcations companies).


Articles like this for me tend to vindicate Google's notorious hiring processes.

While it is true that most people will not need to whiteboard a binary tree inversion in their day-to-day work, Google seems to expect its engineers to be able to throw themselves at any problem they're given, to pivot in skillset quickly, and to have an appreciation of all the developments going on around them so they can apply any novel ideas developed internally to what they are currently working on.

In those cases, hiring based on sound knowledge of CS fundamentals seems like a good bet...

60k engineers is a pretty terrifying number though.


I'm skeptical nevertheless. In my experience, most programming is very different from R&D, which often requires significant concentrated training; without it, even the smartest will spin their wheels.

It's hard to describe, but research (which the vast majority of ML remains) is something for which even a sound knowledge of fundamentals might not remotely be enough.


60k is total number of full-time employees. It includes non-engineers, and does not include contractors.


Google has largely moved away from those BS questions; they just bias towards people who memorize answers on LeetCode but aren't actually capable of producing anything.


I know two people who've interviewed at Google in the past three months and have received a full slate of computer science homework problems.


Memorizing answers to algorithm questions is a poor time investment. I don't see a lot of people doing it. Most smart folks just learn how to design algorithms on the fly, that's much easier and more useful.


> Articles like this for me tend to vindicate Google's notorious hiring processes.

No, because they have rejected ML experts if they can't do their stupid dog & pony show.

> hiring based on sound knowledge of CS fundamentals seems like a good bet...

Too bad many of them can't get their heads around the ML math.


"probably almost half of its 60,000 headcount are engineers"


Anyone happen to have a suggested self-teaching path for Machine Learning? I.e. books and courses. I know that Andrew Ng's course is a great resource, but I know that I'm not ready to start it yet. I'm actually way behind on the mathematical pre-requisites, so recommendations for that would be greatly appreciated as well. I've never taken a statistics course, and never received any formal education for mathematics past trig. I know that I'm looking at a good 6 months to a year just to get caught up on the math alone.


I'm sure others in this thread will have some good advice on the math front. You will want to be comfortable with statistics (as it seems you are already aware), but you will also want to be comfortable with linear algebra. Andrew Ng's course has a quick tutorial on linear algebra; you might also want to check codingthematrix.com. Khan Academy is a decent place for stats, probability, linear algebra, and calculus. I know there has been some criticism of K.A. in the past, but I think it's a good resource for an intro-level understanding of those topics.

As an intro to ML, I am a fan of Coursera's ML specialization by the University of Washington (https://www.coursera.org/specializations/machine-learning). It's free, except for the capstone, and the instructors do a good job of giving both theoretical and practical grounding in various aspects of ML.

I am sure others will have good suggestions as well. Good luck.


This Coursera specialization is almost the polar opposite of Andrew Ng's. It gives a very rudimentary explanation of a concept and then has you do a very basic practical exercise using their framework. The tests are simple enough that you can just replace $variable and pass, but you'd be hard pressed to apply it to a real-world problem.

I started with the Andrew Ng course and found it way too dry and too mathematical, whereas the Dato one seems too simple.

The TensorFlow course seems humorously hard: 15 minutes in, you get "Please implement softmax using Python". OK, maybe later.


Well, if they gave you the formula for softmax, it shouldn't take more than a minute to implement it:

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))  # subtracting the max avoids overflow
        return e / e.sum()

where x is an array of numbers.
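For example:

    >>> softmax(np.array([1.0, 2.0, 3.0]))
    array([ 0.09003057,  0.24472847,  0.66524096])

The outputs are nonnegative and sum to 1, which is the point of softmax.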


The sad truth of the matter is that ML is more applied research at this point than a sensible set of programming problems.

From that standpoint, graduate mathematics is more useful for a practitioner than any robust programming experience.


ML involves math. That does not mean it's "applied research," though. The math is mostly at the undergraduate-college-level, and is mostly applied math - except for very theoretical ML/statistics which a practitioner can easily avoid. The math involved straddles an awkward boundary where most undergrad math majors find the math quite simple, but most CS majors would think it's too much math.


There are two major cases: academic, related to algorithm design, and industry, related to deploying already-existing algorithms on various data sets.

For a CS engineer who wants to be able to use the latest Inception neural net from Google in his pipeline, there is actually almost zero math need. It's like any other API. In goes the image, out comes the label.

What she would need to know, as a good utilizer of ML, is just a bunch of concepts, such as training/test/validation splits, bias/variance, how to extract features from data, and how to select a good algorithm and framework. So it's mostly data cleaning and tuning hyperparameters, the latter of which can be learned by trial and error and by talking to experts. The direct applications of math for such an engineer would be slim to nonexistent.
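For instance, the whole train/test/tune loop fits in a few lines of scikit-learn (toy data, purely illustrative):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, GridSearchCV

    data = load_iris()
    # Hold out a test set; tune hyperparameters only on the training split.
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          param_grid={"C": [0.01, 0.1, 1, 10]},
                          cv=5)  # 5-fold cross-validation on the training set
    search.fit(X_train, y_train)
    print(search.best_params_, search.score(X_test, y_test))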


That isn't "doing machine learning," for the same reason that web developers aren't "operating systems programmers" (even though they use operating systems and need to know some OS concepts).


So, what do you expect your ML engineer does all day? Most of them do in fact spend much time "applying".


Developing new or improved models.


Just curious, how far did you get into the UW specialization? The first course is certainly rudimentary. Also important to note, after the intro course you don't have to use Graph Lab Create and you can use pandas, numpy, scikit. I have seen people in the forums use R as well. I thought that the regression course & classification course were very thorough, although it does feel as though some of the programming exercises are "hand holdy". Overall, I think it is a solid specialization to get into ML, it's not meant for those experienced with ML or AI.


I actually think the hardest part about ML is the lingo. It's very alienating that even simple concepts seem to have their own lingo. A lot of the ideas are just what you as a developer might do intuitively if you had to implement it. But the language tends to be a bit mathy and obscure. So, when you try to read something without understanding the lingo, it seems impenetrable. But once you know things like "quantization is basically rounding" . . . it becomes easier.

Since ML comes from statistics, math, programming, but also other scientific fields, it can even have many terms for essentially the same thing.

For me, as a developer, it was actually easiest to just read some tutorials like the docs for scikit learn and then just start digging through the code of a bunch of libraries. How people name the classes tells you what they think things should be called. But the code tells you what it actually does. I just bounced back and forth between code, tutorials/blogs and books. After a few months, I can actually have a reasonable conversation with our ML people in the language they use and everything else I look at seems easier because I understand most of the terms.

I think asking how to learn ML is a lot like asking how to learn German. It might feel like you need to start with the grammar rules. But I think immersion is the best way. Get the vocabulary, then come back to the rules. I also find that having a burning question in my mind helps me with immersion. So, if you can find a project that drives you, maybe that will help.

So starting with the math fundamentals as a developer seems like an easy way to burn yourself out. But everyone does learn differently. If not, there wouldn't be so many ML algorithms, right? Right?


I agree. I started Andrew Ng's Coursera course and it seemed pretty math-heavy and dry. I started reading through the TensorFlow tutorial and a couple of other more hands-on approaches, and I get a better idea of what is actually happening from the latter.

Am I likely to need matrix multiplication if I start doing machine learning, or is that the equivalent of writing a sort algorithm for a web dev: maybe useful to know the concepts, but in reality you won't actually use it?


My experience so far is that it's important to know WHY you use linear algebra. The idea, to me, is that almost all input data passed to a machine learning algorithm should be transformed into a multi-dimensional array of doubles.

It's easier to write algorithms against this.

Since a lot of ML libraries use native libraries for linear algebra, you might see a lot of implementations that are written in terms of linear algebra operations. So, if you're trying to read the code and you don't understand what the operations do, it may be hard to grok.

So, yeah, I think some understanding of linear algebra is necessary. Because it's sort of the atomic set of operations underlying most ML you'll see. To read the code, you need to be able to read the linear algebra. But you probably don't need to go read a book on linear algebra. I tried that and it pulled me away from what I wanted to know. It might be enough to just understand the numpy docs.
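As a toy illustration of why the data becomes a matrix and why the operations are linear algebra:

    import numpy as np

    X = np.array([[1.0, 2.0],    # each row is one example,
                  [3.0, 4.0],    # each column is one feature
                  [5.0, 6.0]])
    w = np.array([0.5, -1.0])    # one weight per feature

    # A single matrix-vector product scores every example at once;
    # most ML code is written in terms of operations like this.
    print(X.dot(w))              # [-1.5 -2.5 -3.5]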


Six months ago, I would have said Kaggle, Jupyter, Python, figure things out. I've since discovered Microsoft's ML Studio. It allows you to start out with drag and drop (no code to learn) and, most importantly, you can visually see the output of your experiments. For example, if you run a binary decision tree algorithm you can actually look at images of the 1000 trees it created and what their nodes contain. Not important for practical functioning in the real world, but I like it a lot as a learning tool.


Does this actually teach you much though? Or will you more likely end up toggling a bunch of things, seeing an output, and not having any better understanding of what led to the output or why a given approach works better?

Not that there isn't value in immediate results for building excitement and interest--I just want to have proper expectations before I check it out as I'm in a similar state to the parent in terms of where my math is and wanting to dive in.


When learning, I like to continually create a mental model of what will happen, and check whether I'm correct. It's like doing problem sets in math, then plugging the problem into MATLAB to see the result.

I've never used this Microsoft product, but if it lets you take educated guesses at what will work, and gives you some insight into the intermediate steps, then it's useful as a check that your mental model of machine learning is becoming more coherent and useful.

Plus, if you slot in something and it gives a better output, you can go back to your studies with a new target of finding out why X param changed things.


The particular concern that sparked this for me is over-fitting to the data set. I don't know enough about ML to know how much of a risk that might be, but with a tool like this I wonder whether that becomes obvious, or whether you risk taking away false lessons just because you saw the output you hoped for, despite the model being perhaps horribly overfit.

Again, that's just one example, and the instant visual feedback is awesome (I'm a visual learner, so that's huge). But at the end of the day, I know that there is a lot of math and code under the pretty graphics, and at some point I'll need to tackle that to make sure I am actually learning this and not just making assumptions based on what I can eyeball with some visualizations.


Learn to multiply matrices (you can probably Google this). Note that AB != BA in matrix math. Learn derivatives and how to take them with a lookup table. Learn what log() means (the inverse of raising a number to a power).

That's enough to implement and understand neural networks. You'll fumble around a lot more than you have to, but you can figure it out.

Honestly, you could probably fight your way through Ng's class with just matrix multiplication, which you can learn in less than an hour fairly easily.
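You can check the AB != BA point in a couple of lines of numpy:

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])

    print(A.dot(B))  # [[2 1]   (B on the right permutes A's columns)
                     #  [4 3]]
    print(B.dot(A))  # [[3 4]   (B on the left permutes A's rows)
                     #  [1 2]]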


Typing out text isn't hard (and honestly, if you're working with software it's preferable). GUIs give you two things to learn: the fancy editor, and the language.


Was the GP edited after you replied? Because the comment as it exists now is about how ML Studio makes it easy to learn through visualizations. The difficulty of typing has nothing to do with it.


You can work through the Coursera variant of Andrew Ng's course without a deep math background: https://www.coursera.org/learn/machine-learning

More in-depth videos of the course are on YouTube: https://www.youtube.com/playlist?list=PLA89DCFA6ADACE599


Check out this (free) book http://ciml.info. I used it in my ML course (professor was the author), and remember it being one of the better textbooks I've read. Covers a variety of topics in a relatively easy-to-read and succinct manner, given the subject matter.

Not exactly light on math, so you may want to read up on some multivariate Calculus and Linear Algebra before the later chapters. First few sections should be approachable regardless.


I'm a big fan of this website in general, and they have a specific guide for 'everyone interested in machine learning'.

https://www.metacademy.org/roadmaps/cjrd/level-up-your-ml


I'm taking time off to study ML and keep an ongoing list of curriculum resources, as well as a blog of my day to day, here:

http://karlrosaen.com/ml/


Thanks for this! I can see that you and I are somewhat on the same page in terms of mindset, though you're far ahead of me when it comes to both dev experience and math.


This is great, thanks! Have you looked into the udacity ML nanodegree? I gave it a cursory look and it seems pretty decent.


You're welcome!

I looked a while ago; the Udacity nanodegree looks interesting but is kind of a subset of the materials I'd already lined up. I also think part of the challenge is tailoring a curriculum to one's existing strengths, so in my case I'm spending less time on general programming / data munging and more on stats fundamentals and ML algorithms, and I find that most all-in-one MOOCs have some material that is less worthwhile for me. Also, some of the projects they feature, like the Kaggle competition https://www.kaggle.com/c/titanic, can be undertaken independent of Udacity.

I really think Python Machine Learning + https://www.kaggle.com/c/titanic + kaggle.com/c/forest-cover-type-prediction is a great place to start on the practical ML side.


There was HN thread about this: https://news.ycombinator.com/item?id=11859165

Below is my favorite response by vaibkv:

vaibkv 15 days ago

Here's a tentative plan:

1. Fully do Andrew Ng's course from Coursera.

2. Do the AnalyticsEdge course by MIT folks on edx.org. I can't recommend this course highly enough; it's a gem. You will learn practical stuff like ROC curves and whatnot. Note that for a few things you will need to Google and read on your own, as the course might just give you an overview.

3. Keep the book "Elements of Statistical Learning" by Trevor Hastie handy. You will need to refer to this book a lot.

4. There is also a course that Professor Hastie runs, but I don't know the link for it. I highly recommend it, as it gives a very good grounding on things like GBMs, which are used a lot in practical scenarios.

5. Pick up Twitter/Enron emails/product reviews datasets and do sentiment analysis on them.

6. Pick up a lot of documents on some topic and make a program for automatically producing a summary of those documents (first read some papers on it).

7. Don't do Kaggle. It's something you do once you have considerable expertise with ML/AI.

8. Pick up flights data and predict flight delays. Use different algorithms; compare them.

9. Make a recommendation system to recommend books/music/movies (or all).

10. Make a neural network to predict moves in a tic-tac-toe game.

These are a few things that can get you started. This is a vast field, but once you've done the above in earnest I think you'll have a good grounding. Pick a topic that interests you and write a paper on it - it's not such a big deal.


You should start with an intro calculus class (e.g. Calculus I). Andrew Ng's Coursera course teaches you the necessary linear algebra. After his Coursera course it'll be worthwhile to take a linear algebra class.


If the only math you know is up to trig, you're probably multiple years away from getting caught up on the math.

You need to first learn calculus and linear algebra, and learn them very well. I would also recommend a good understanding of probability. Learning all of these well will take at least a year, if not longer. For instance, I took one year of calculus in high school and then one semester each of linear algebra and probability, which adds up to two years.

You'll need calculus so you can do optimization (i.e., at the simplest level, take a derivative, set it to 0, and solve; of course there's more you can do with calculus in machine learning). You'll need linear algebra for almost everything in machine learning. Lastly, probability will be useful for understanding very basic methods like Naive Bayes[0]. There are other methods built on probability as well[1].
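As a toy example of the "take a derivative, set it to 0" idea, minimizing f(x) = (x - 3)^2:

    # Calculus: f'(x) = 2*(x - 3) = 0  =>  x = 3 is the minimum.
    # Gradient descent reaches the same answer numerically:
    x = 0.0
    for _ in range(100):
        grad = 2 * (x - 3)  # derivative of f at the current x
        x -= 0.1 * grad     # step downhill
    print(round(x, 4))      # ~3.0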

If you skimp on learning any of these, you will never be able to understand Machine Learning at a deep level, much less even a shallow level.

[0] https://en.wikipedia.org/wiki/Naive_Bayes_classifier

[1] https://en.wikipedia.org/wiki/Graphical_model


"Python Machine Learning" is a pretty good book. I also like "Natural Language Annotation" which is a bit specialized but there aren't all that many books on the annotation process.


And my anecdotal experience is that it's working extremely well. Take the Google Photos app that does automatic image recognition and tagging. The other day I was looking for a picture we took of our cat the first night we brought him home. I remembered we left him with a blanket in the bathroom but couldn't remember much else.

"kitten bathroom 2013"

And there was a picture of the cat sitting in the tub on a blanket. Simply amazing.


Strangely, half the time I try to use Google Now on my phone, it doesn't seem to understand basic queries that worked two years ago. And in the meanwhile, features and APIs that used to allow more reliable and explicit control (e.g. like in Picasa) are being shut down. I guess someone at Google figured that imitating Apple is worth sacrificing what remained of their power user appeal.


Just yesterday I was amazed that I was unable to Google 'what is the smallest website possible' or 'what can you fit on a 32kb website' or whether an HTML demoscene exists at all. And sometimes the results seem like complete spam: instead of showing me answers about Xcode, it was showing some heavily SEO-ized Apple blogs.


Pretty sure "HTTP/1.0 200 OK" is the smallest possible website.


I have also noticed that the results are becoming more spammy, I thought I was losing it.


With or without AJAX? /jk. But 32KB is already a lot if there are no images or JS libraries.


> I guess someone at Google figured that imitating Apple is worth sacrificing what remained of their power user appeal.

"Sacrificing what remained of their power user appeal" is imitating Apple!


Google thinks half my pictures of cats are dogs


I seem to recall Google focusing the entire company on social/Google Plus. Is this saying the company is now being focused on machine learning in the same way?

Reminds me of the Ballmer/Gates strategy of everything must be Windows, which seemed flawed to me.


That's an interesting way to look at it.

I would argue that Google+ didn't work out because Google was trying to play catch-up in a field that it just lacked knowledge in (social networks).

Whereas with machine learning, they're not playing catch-up, everyone else is. Of all the other tech titans out there, they're the ones really leading the pack.

That remark aside though, I agree with you. An attempt to go hard on machine learning and apply it everywhere will probably work out pretty badly. As fascinating as ML is, I just haven't bothered to learn it yet because I haven't the slightest idea what new and novel problem I'd solve with it that doesn't have a better solution through a more straight-forward approach.


"An attempt to go hard on machine learning and apply it everywhere will probably work out pretty badly. I haven't the slightest idea what new and novel problem I'd solve with it that doesn't have a better solution through a more straight-forward approach."

Assuming they have the money, isn't this exactly the kind of reason Google should train up a wide spectrum of engineers from different teams and then see how they apply machine learning to their respective domains? It would be foolish for Google's management to think they can divine a priori all the best possible uses of ML in their various lines of business. Why not tool up a bunch of smart people, set them loose, and see what works?


In between, they focused the entire company on switching from Desktop to Mobile.


And at some point the back key on their search page stopped working and they branded this as a feature.


I was kind of surprised that this article hooks onto that relatively small "Ninja" workshop. My impression so far was that Google more or less created the whole machine learning movement (out of necessity, from its two core fields, search and ads/analytics) and employs several authorities in the field.

After Google Now, DeepDream and all the self driving car hype, reading about that workshop being the start of the big transformation seems strange.


> My impression so far was that Google more or less created the whole machine Learning movement

How did you get this impression? It has little basis in reality.


Good marketing, I presume.


In 2008, Peter Norvig was quoted as saying there was very little, if any, machine learning in Search. They found it unreliable.


I thought 8 years is a long time and felt ML is only now becoming mainstream.

Interestingly, Trends shows me a steady rise for 'machine learning', while searches for 'neural networks' have been dropping since 2004

https://www.google.com/trends/explore#q="machine%20learning"...


To be fair, Peter Norvig is much more "old AI" and shallow learning, which doesn't fit a lot of cases.

Also, 2008 in deep learning terms is 100 years ago :)


Peter Domingos? Really? Did they mean Pedro?

Sigh. Another instance of pop science getting most everything wrong (and I haven't even bothered to write anything about the technical content in the article).


Could you say more? What do you think are the technical inaccuracies?


A few I noted: neural nets don't emulate the brain. NIPS is not an obscure conference; it's been the top ML conference for decades (sure, it's obscure to laymen, but so is pretty much every scientific conference).


Agreed with this guy. Back when I started grad school (2012), NIPS was already so big they moved it to Vegas, but the casino venue didn't fly so well, so it moved to Montreal. NIPS was obscure maybe in the early 2000s, but definitely NOT in the last 5-6 years.


That's a good change from the "social first" of a few years back. Google was never a social company to start with. Remember Orkut?

AI is Google's leverage. It should keep exploring that path.


I find this article alarming.

Jeff Dean said, "The more people who think about solving problems in this way, the better we'll be". I sincerely hope that Sundar emphasizes the thoughtful application of ML and does not allow black-box algorithms to take too central a role.

This kind of hubris swept through wall street banks during the structured products boom, ultimately leading to products such as synthetic collateralized debt obligations. Taking Jeff Dean's opinion about whether machine learning would be a good thing is like taking the opinion of the creator of synthetic CDOs whether they were a good thing. The authors and evangelists are blinded by optimism and opportunity.

Is Sundar Pichai swept away by the opportunities of machine learning and too biased to be aware of the risks? Is Sundar acting like Stan O'Neal did when he pulled out all the stops at Merrill Lynch and went all-in with CDOs? I hope he isn't. It does not seem to be the case, as he mentions thoughtful use of ML.

Nonetheless, caution should be taken.


Bit of a self-plug here: LearnDataScience (http://learnds.com) has been well received as a starting point for newcomers. It's a set of Jupyter notebooks with a lot of hand-holding. The Git repo has data sets included so you can clone and go. All Python.


I'm not sure where this is going at all: evolutionary leaps often come from outliers and sometimes from serendipity. What about the reinforced confirmation bias here?


On first blush, my sense is that a translation could go something like "we're prioritizing the analytics API over the results API." Not analytics in the webserver sense, but the OLAP/DW one. So, e.g. ad targeting fidelity over results presentation algorithms. Backend biz vs frontend.


This is a really great idea, especially when done right. The difficulty with machine learning and AI is understanding the pitfalls inherent in selecting data and training systems. You can fool yourself pretty easily into thinking you've got something that works when you really don't. That said, it sounds like they're doing things well. I have no doubt this will have a positive impact in demystifying the "magic" of ML/AI and making all those Google products I use better!


"And then (this is hard for coders) trusting the systems to do the work."

Like you say, it can be easy to think that something works when it really doesn't. I hope that the above quote isn't meant to be interpreted as "believe the results are correct." Evaluation is paramount when working on these systems to avoid making such mistakes. I assume Google is including evaluation in their machine learning training, but it would have been nice to see that pointed out in the article for folks who may have an interest in machine learning but don't know what's important to focus on.


One big problem with ML is that it's highly dependent on your training set. A few papers in computational linguistics discuss how poorly ML-based sentiment analysis performs when you apply the model to domains outside the training set. For instance, if you train the sentiment model on movie reviews (a data set commonly used for that purpose) and try to apply it to Twitter or the Web, the results are terrible. But people keep on trying it.
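You can see the failure mode with a tiny scikit-learn sketch (made-up sentences, purely illustrative):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # In-domain training data: movie reviews.
    reviews = ["great acting and plot", "terrible boring film",
               "wonderful direction", "awful script"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    clf = make_pipeline(CountVectorizer(), LogisticRegression())
    clf.fit(reviews, labels)

    # Out-of-domain text: different vocabulary and style, so most words
    # were never seen in training and the predictions degrade to guesses.
    tweets = ["this phone is fire", "ugh my commute smh"]
    print(clf.predict(tweets))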


I guess now we know who's responsible for asinine UI decisions lately (YouTube apps, Material wastespace design). /s


The article says that Mr. Giannandrea is no longer head of the machine learning division; out of curiosity, who has taken that position? It's not clear from the article.


He's still in charge of research -- he's just in charge of search now, too.


Maybe I'm talking nonsense, but the term "machine learning" could be detrimental to learning it, because it feels so machinesque... It's a cool term, but also very vague and mystical, and through its anthropomorphism it kind of implies the engineer is a teacher, or a translator. You're not even started, and you're already confused.

Surely it is better to talk of learning deep neural nets, and such things. Or maybe "machine training" would be less intimidating. But I guess we're stuck with it, and it's not so bad.


When will they move past the "Slogan First" magpie direction-switching?


Great article, but I can't help but CRINGE at the "ninja" references. I think that's already played out within the industry... and although pop-tech writers tend to lag a few years behind, it will sound extremely dated in the mainstream within a few years.


Agreed, and I've been waiting over a decade now for the demise of tiresome qualifiers like "ninja" and "on steroids" (we could all add a few, I'm sure). I was really, really tired of "uber", too, but now that one seems here to stay for quite a while longer. Oh, well...


I agree that these are all problems, but the one that bothers me most is the use of "x master race" such as PC master race. Considering the association with the Nazis and genocide, I keep hoping that it will finally die.


Why? What could be more insulting to Hitler than associating his 'glorious master race' with a bunch of gamer neckbeards living in their moms' basements? One of the greatest abilities we have to take the power away from something is to redefine it.


> “The tagline is, Do you want to be a machine learning ninja?”

I don't really like the word, but I don't really give a flop either.

I'm not sure how it's better or worse than guru, rockstar, or any other lame word recruiters like to use to make us feel like the special snowflakes we are.

Which word would you like to see in place of 'ninja'?


I'd rather see all of those juvenile testosterone labels discarded in general.

Sheesh... "Do you want to make the world a better place?", with a photo of Gavin Belson holding an animal, would make me more inspired.


If it makes you feel better, I don't think you were supposed to be inspired. "ML Ninja" is just the name of the rotation program. If your team sends you, it's because they need someone to get the training, not because the program name makes it sound cool. I doubt the PM thought it would be public when she named it.


What are the juvenile "testosterone" labels?

The article leads with a low-testosterone star.


Consider the turtle ...


Because not all of us are ninja fans. I personally like the Shaolin masters more. And while we're glorifying paid killers, why not go all the way and strive to be like the original Assassins (https://en.wikipedia.org/wiki/Assassins)? Granted, they were Shia Muslim killers who had a bad habit of also killing Crusaders (i.e. Christians), so that might not play out so well on a professional programmer's CV.


The word "ninja" in recruiting was almost dead before today.


Expert?


Well, if she's a black belt 2nd dan in a martial art, she's entitled.


If she's a 2nd dan, then she'd be qualified at a high level in taekwondo and wouldn't be entitled to call herself a ninja; ninjas practiced different martial disciplines.


That first paragraph almost made me stop reading.


Reading shit like this makes me wanna drop everything and start a Maths degree and get seriously into Machine Learning. Can you imagine being picked at work to study something AWESOME while being paid for it?!? She must be a genius.



