How AI and Machine Learning Work at Apple (backchannel.com)
328 points by firloop on Aug 24, 2016 | 143 comments



None of this really answers the overarching question that Jerry Kaplan and Oren Etzioni raised. The question most in the field are asking isn't whether Apple use AI/ML internally; it's why they avoid the research community so strongly.

For me, the greatest thing about the ML/AI community is how open it is and how strong a sense of camaraderie there is between people across the entire field, regardless of whether they're from industry or academia.

Employees from competing companies will meet at a conference and actually discuss methods. Papers are released to disseminate new ideas and as a way of attracting top tier talent. Code is released as a way of pretraining students in the company's stack before they ever step through the company's doors. Papers are published on arXiv when the authors feel they're ready - entirely free to access - rather than waiting for a conference to spread their ideas.

This culture of camaraderie has accelerated the pace at which research and implementation progress in AI/ML.

... but Apple are not part of that. They publish little and more broadly don't have a good track record. On acquiring FoundationDB, they nixed it, with little regard for the existing customers. Fascinating pieces of technology lost. If they aren't using the exact thing internally, why not open source it? I fear the same is likely to happen to Turi, which is especially sad given the number of customers they had and the contributions many of Turi's researchers previously made to the community via their published papers.

Apple may change in the future - they may become part of the community - but a vague article of self-congratulation isn't going to sway me in either direction.

"We have the biggest and baddest GPU farm cranking all the time" ... Really? ¯\_(ツ)_/¯


What makes you think this is self-congratulation? This is a response to Marco Arment, Ben Thompson, and other pundits (largely spurred by the PR efforts of Google and to a lesser extent Facebook and Microsoft) who claim that Apple is "behind" in this field or that it somehow poses an existential threat to Apple. This is Apple saying "uh no".

They'd be perfectly fine with not talking about it as they've done until now if they weren't trying to counter these media narratives.

As to your point about Apple not engaging with the research community, how is this a surprise at all? Their M.O. has been secrecy ever since Jobs returned to Apple.


This article lacks any real evidence they're ahead though, which is essentially equivalent to "behind" if (a) the majority of the advances are democratized by your competitors and (b) you lack the ability to attract top tier talent. If AI/ML is an existential threat to Apple, then this article doesn't provide any comfort beyond being a PR puff piece.

The talent war for AI/ML is truly insane. Given that the top researchers and engineers can get similar benefits at other entities, whilst also continuing to publish code and research out in the open, Apple aren't that attractive.

They may try compensating for this via acquisitions but that still leaves a fundamental issue when it comes to long term retention.

As I mentioned before, this is a fairly universal trait. Many come from academia or have open source affiliations / rely on open source tools for their skill set, so the idea of putting their ideas out there to be used, ...


If what AI/ML talent wants is to do research and draw a Silicon Valley salary then you're right, they're not going to work at Apple.

Some people don't want to work at Apple because they can't describe their job on their LinkedIn pages or tell their friends and family about what they do. Others don't like that they have to buy their own meals at Caffe Macs. I don't think Apple worries about this too much.

What they do worry about is finding people that fit Apple's culture and want to work on products used by hundreds of millions of people.

In any case, being perceived (or not) to be at the top of the field and being able or unable to hire the best ML/AI talent isn't an existential threat. ML/AI isn't magic; it's a technology. If there's one thing the tech industry should have learned from Steve Jobs, it's that you make good products by working backward to the technology. In that respect, what Federighi mentioned about not having an ML/AI division was reassuring.


> If what AI/ML talent wants is to do research and draw a Silicon Valley salary then you're right, they're not going to work at Apple.

Let me simplify that further: "If what AI/ML talent wants is to publish...then you're right, they're not going to work at Apple". Money may or may not be a factor, but Apple's secrecy won't allow the talent to publish their research. Given that AI is highly driven by research right now (no "if" about it), what talent would want to work somewhere they can't publish research papers?


"what talent would want to work somewhere where they can't publish research papers?"

There are a lot of people out there who don't subscribe to the publish-or-perish ethos of academia and would rather publish only after they have something worthwhile to publish, not just footnotes to an established scheme.

E.g. https://www.theguardian.com/commentisfree/2014/feb/14/higgs-...

and so on.


> Given that AI is highly driven by research right now (no "if" about it), what talent would want to work somewhere where they can't publish research papers?

Ehh. I would question whether doing research is necessarily highly correlated with wanting to publish papers all the time. A lot of the work during product development is a mix of AI, UX, and engineering. The overall output might well qualify as research, but the individual components might require far more fleshing out to reach publishable quality. And since industry doesn't have academia's publish-or-perish doctrine, we aren't forced to publish at the same rate.


I agree with your implicit assumption -- that the talent and PR battle is motivating this article -- but disagree with your assertion that secrecy makes Apple unattractive to ML talent.

You're weighting open publishing very heavily, but others may have different weighting functions. The ability to interact with device makers may be convincing to some, or maybe others like the Apple company and culture, or Apple's privacy stance.

The criticism that the article is PR or boasting is true, but somewhat beside the point, because most of the players in this field have always -- for decades -- had a tendency toward PR and boasting. Admittedly, proofs-of-concept like AlphaGo do provide substance behind the boasting, but they have a definite PR element too.


Funny how it's a "response" from Apple and yet a PR effort from their competitors. FYI, it's a PR effort.


"This is a response to Marco Arment, Ben Thompson, and other pundits (largely spurred by the PR efforts of Google and to a lesser extent Facebook and Microsoft)"

It may not be self-congratulation, but this sentence is definitely unsubstantiated slander.


Some companies engage academia, many others do not. I find it very surprising how willing companies like Microsoft and Google are to let their best and brightest invest in publishing. I get the long term benefit, but in most companies secrecy is the default.


To me it's much more surprising that Apple and other companies refuse to allow publishing. Anyone in AI who wants to continue to be recognized in and remain friends with others in the field and advance their career should absolutely refuse to go anywhere like Apple, unless they plan to stay at that one company their entire life. It's amazing how frustrating it is trying to talk to Apple employees at conferences -- at this point I pretty much avoid it if possible.


A friend of mine works in pharmaceutical research. Secrecy is definitely the norm in that field; they even prefer to keep a formula in a safe rather than file a patent, because they think the patent system isn't protective enough to recoup their R&D money.

It all comes down to the same old paradox: the current patent system encourages patent trolls and discourages real innovators from publishing, because they are too easily copied.

One alternative is open source, but when huge R&D costs are involved it's maybe not the best idea. I think Apple is currently striking a good balance: they work on some projects in the open (WebKit, Swift), where community feedback is important... and they keep innovative R&D techniques secret in order to monetize them.


[voluntarily removed]


His blog post was the direct result of Google I/O.


> The question raised by most in the field isn't whether Apple use AI/ML internally, the real question is why they avoid the research community so strongly.

I'm not sure why this is a "big deal" or even much of anything. Apple has always done this. Even with their work on WebKit, they weren't always fully "there" with the community. This is how Apple has always worked; why would it be any different with AI/ML?

> For me, the greatest thing about the ML/AI community is how open it is and how strong a sense of camaraderie there is between people across the entire field, regardless of whether they're from industry or academia.

You get this in every field, but there are also a ton of players out there who don't show up. Just like Apple, there are plenty of companies hard at work on various AI initiatives who also aren't going to these events.

I think you're making a mountain out of a molehill here. Does anyone really care Apple doesn't give away free time / knowledge to people?


> Just like Apple there are plenty of companies hard at work on various AI initiatives who are also not going to said events.

So what are these companies doing? Linear regression in Excel? I don't see any of them beating Go players or building self-driving cars.

> Does anyone really care Apple doesn't give away free time / knowledge to people?

End users, no. Engineers and researchers, yes. Who builds the products the end users love? Engineers and researchers.


>I don't see any of them beating Go players or building self-driving cars.

Because they aren't telling the world. On a lesser scale, CMU was building self-driving cars for years before Google decided to step up the PR and tell everyone self-driving cars were possible.


There's a big difference between driving in a controlled environment or on a track versus driving on the street in numbers, where it can pose real danger to people.

Google's self-driving car is one of the most advanced uses of technology I've seen. It's borderline magic in our time.

Edit: removed "give credit where it's due". You weren't withholding that, apologies.


> So what are these companies doing?

Uh, well, what are the use cases for AI / ML? That's what they're doing. They're using it all over the place. It's almost unheard of to find a small tech company not at least exploring ML to see if they can use it to gain an edge.

Why do they have to detail what they're doing so they're not automatically bucketed into the "doing linear regression in Excel" camp?

> Engineers and researchers, yes. Who builds the products the end users love? Engineers and researchers.

Funny, these same practices at Apple for the past 40 years haven't turned engineers and researchers off from working there, and they aren't turning them off now, either. Apple has never had to give away time and knowledge for free to attract talent. In fact, most companies don't need to; relatively speaking, only a few do.

Seems a bit entitled, in my opinion, to think technology companies have to give away their time and expertise. They're going to attract talent if they're interesting / create interesting things as a company.


>the real question is why they avoid the research community so strongly.

Along the same vein, does Apple even employ any 'brand-name' researchers? From what I've seen, all they do is take research invented elsewhere and apply it. I think Alex Acero is the most recognizable researcher Apple employs.

So to answer your question, they don't publish and don't attend conferences because they simply don't have many researchers on board.


"real question is why they avoid the research community so strongly."

I think it's in their DNA.

They are a very secretive company for a lot of historical reasons.

Company culture is what a company is ... the social rules built into the organization, which are very difficult to change once established.

Apple is within their rights to do what they are doing. Other companies take different approaches, so it's healthy to have that variety.

If this were weapons systems or cancer cures, I might have a different opinion.


>>They are a very secretive company for a lot of historical reasons.

It's true, they saw what happened to Xerox PARC's ground-breaking UI work...


Every company copies other companies' innovations.

Smartphones would not exist unless BlackBerry had created the market for them.

But they aren't secretive for that reason - they are secretive because they want to announce and present things on their own terms. They don't want leakage pre-product.


I like how Apple can't win here.

If they publish, they aren't doing anything new and they haven't innovated since Steve died, and they should really just give up because there's obviously no point to anything they do and hasn't been since 1997.

If they don't publish, they're evil secretive bastards who don't contribute to the ML community and probably drown puppies or something because who knows what goes on behind closed doors?

I don't really have a dog in this fight, except inasmuch as I'm a generally satisfied iPhone owner. I just think it would be really neat if people would settle on one narrative or the other, instead of keeping on with both at once.


If they were serious, they'd provide objective evidence for their claims. Papers, source code, numbers. Not "we have the biggest, baddest GPU farm" and refuse to say any more as policy. Under this veil of secrecy and doubletalk, all they have is marketing: Trust the brand rather than objective reality.

Many other companies are mature enough to show their cards but Apple keeps declaring victory. Apple writes its own narrative instead of participating.

There are mature ways for this to shake out and they don't include your strawman dichotomy.


> If they were serious, they'd provide objective evidence for their claims. Papers, source code, numbers. Not "we have the biggest, baddest GPU farm" and refuse to say any more as policy.

But why? Who cares about papers, source code, or numbers beyond the tiny segment that HN caters to? Apple has always been about the user experience. When they discuss prior accomplishments, they usually do so in layman's terms. Does that mean they can't manufacture? Nope. AI is simply harder to show off without the papers / source code you mention, but releasing those isn't typical of Apple.

I honestly don't think they care to go beyond the level of detail outlined here. I've interviewed with the Siri team twice now, and they certainly have some incredibly smart people. Whether they're "winning" against Google, Microsoft, or whoever? I say who cares. They want to control their narrative without divulging too much detail, as they have always done.


They can put out as many PR pieces as they want, but if they want to be judged on their public accomplishments, they're losing. Even strictly judging UX, in ML areas they're behind: Google's speech recognition is superior, Google's automated photo organization is superior, and Google Now is superior to Siri.


> But why? Who cares about papers, source code or numbers beyond the tiny segment that HN caters to?

I don't know... say, the thousands of future engineers and researchers who, you know, make the product?


N=1, but as an engineer I don't care about publishing studies, research, and whatnot. I care about creating useful, extremely easy to use products that solve real pain points. The lack of AI/ML papers or research posted by Apple doesn't make me any less likely to apply there once I finish my graduate work. They're a great company to work for, and they are most assuredly doing some massively cool stuff with their AI/ML development.


> I don't know, say, the thousands of future engineers and researchers, that, you know, make the product?

Apple hasn't needed to do that to attract engineers and researchers pretty much ever; now with AI it's different? Maybe it won't help attract some small portion of people, but I'm not sure why it supposedly matters now when it hasn't in the past.


Yes. That tiny segment.


In my mind, this article is squarely aimed at that segment. They are trying to tell one of the most competitive labor markets in the world, AI developers, that they're hiring, but more importantly, that they're doing interesting things.


+1. Apple doesn't have a great relationship with the open source community, but that shouldn't discredit them for innovating and trying to use AI, ML, and NNs in their products. Even being closed source, they are still pushing the envelope by inspiring us and their competition.


>> "If they were serious, they'd provide objective evidence for their claims."

Why do they need to? At the end of the day, all that matters is that the products they ship are good and aren't lacking compared to competitors. The only other reason to provide evidence for their claims would be to appease people calling bullshit, and unless it's affecting sales, no company should stoop to that.


A lot of their products are lacking though. Maps is awful. So is Safari.


Safari is my browser of choice and I have no issues with it (I much prefer it over Chrome). Maps has occasional issues but it's a solid product now. I use it for walking and transit directions and can't remember the last time I had to fall back on Google Maps.


Same here. Maps works when I'm three quarters blasted, trying to get to a train station I have no idea how to find from a bar I've never been to before, and that has to count as some kind of win.


Just the other week I realized that I hadn't launched Google Maps in something like 2 years so I finally deleted it.


It's taken me way out of the way, and across a toll bridge when that was not even close to the best route. Safari is bad because it's behind the times and doesn't keep up with standards. It's really annoying to have to wait to use certain features because IE and Safari can't keep up.


> Safari is bad because it's behind the times, and doesn't keep up with standards

Safari and WebKit have 100% support for ES6, which is more than any other browser - http://kangax.github.io/compat-table/es6/

I think what you mean is Safari doesn't support the standards you want it to support, some of which aren't actually full standards yet but candidates.


Hahahaha. Such bullshit. You're talking about Safari 10, which is in beta and which no one actually uses. Look at 8 & 9: much worse than any browser not named IE 11, and 8 is only marginally better than that. Seriously, I'm glad they're coming out with an up-to-date version, but it's been far too long in coming.


Downvoted but no rebuttal. Typical.


You blame Safari for not being Chrome? I mean, OK, I guess, but there are a lot of browsers that aren't Chrome. Why single out Safari for vitriol?

(And I'm not even going to touch the frankly invidious IE comparison.)


Chrome and Firefox and Opera all keep up. Safari is worse than IE/Edge at this point in terms of implementing standards. It's the absolute worst because it's not even available for anything but Apple products now so it's a pain in the ass to test.


IE is dead. Edge is embryonic. I can't speak for Opera. I use Firefox every day, both as my personal browser and as my primary development target, with validation in other browsers performed via automated and manual testing. And, if Chrome is your standard, Firefox cannot honestly be said to "keep up". It's consistently a few versions behind, at least, in supporting major innovations which reliably appear first in Chrome.

I grant testing is an issue if you don't own Apple hardware, although there are free and inexpensive remote testing services which cover that need well, and I'm certainly not about to say that Safari doesn't pose any unique difficulties to the web developer. But so does every other browser.


I'm a Firefox user as well, and it's both my personal browser and development target. Safari is the only browser that consistently gives me issues.


It's generally helpful if you don't trot out 2-3 year old claims. Both Maps and Safari have had massive changes recently, and both are, if not on par with competitors, really close. Apple has had issues with OS X reliability, Music can be truly bad at times, and iCloud (especially sync) is almost embarrassing. I'm a huge Apple fan and an iOS developer, and they aren't without fault, but if you're going to call them out, make it current.


Curious about your take on iCloud sync. Is that from a developer perspective (I remember the API initially sucked) or user perspective? I've never implemented iCloud sync in an app but at least with Apple's apps it works perfectly for me. I moved thousands of notes from Evernote to Notes which I use daily, I use calendars/contacts, I use Reading List, the credit card and password sync for Safari works really well. In the past there were definitely problems but I haven't come across any in quite a long time.


Mostly from the developer side, but a little from a user standpoint. Parts of iCloud are pretty great, especially CloudKit, but sync itself is really hard to manage and have work correctly. I've found it fairly common to have a decent number of production support issues related to iCloud. That being said, it's getting better IMO, and I think once iOS 10 and Sierra are the default OSes it will be less of an issue.

Common problems include items not syncing, syncing out of time order, and conflict resolution causing synced data to appear lost. Most devs who are really serious about sync tend to roll their own after experiencing enough issues to make iCloud not worth the effort.

This is entirely based on my experience and may not be universally true, but I've heard enough devs repeat my own complaints to feel it's not completely out of left field.


Apart from traffic data, where Google is slightly better, Maps has been fine for me on iOS 9 and is better on 10. Apple even learned from the errors they made with the initial launch and runs these big open betas for a reason.

Safari is perfect for my use cases. Way better battery life, feels lighter than Chrome, etc., etc.

What exactly, in your opinion, is worse about modern Safari and Maps that makes for a worse user experience?


That's a fairly strange reason to question their machine learning claims.


"If they were serious, they'd provide objective evidence for their claims."

I disagree - they aren't making any claims that they care to have validated. They don't care to be 'known as #1' in any given field. We can believe what we want as far as they're concerned.

'If they are serious' - they will make this technology work to make better experiences for end consumers.

'The proof is in the pudding' - so to speak.

I for one don't doubt that their AI 'increased the intelligence of Siri'. But I also don't care about Siri and find it basically useless. AI has different kinds of impacts in different domains. If they make better experiences for us - all the power to them. If not, then not ...


> I like how Apple can't win here.

They have themselves to blame, at least partially, since they painted themselves into a corner. A while back they went on a media offensive proclaiming that "data-mining" the user's information is bad and they won't stand for it, in the interest of privacy... As it turns out, reality is a little more nuanced, and a bit of applied AI is in fact good for the user experience, e.g. your phone can't tell you where you parked your car unless it knows your location and when you stopped driving.

I see this article as Apple PR rolling back on the previous position which inadvertently made Apple look like a company being overtaken by recent technological developments.


I'm not so sure you actually read the article.

It explicitly talks about how they've accomplished a lot of this without violating their users' privacy, and about how denying AI researchers big hoards of data hurts their reputation in the AI community, because most AI researchers want mountains of data.


> It explicitly talks about how they've accomplished a lot of this without violating their users' privacy.

My questions are: (a) who is violating users' privacy, and (b) how is Apple any different[1] from them, besides proclaiming they aren't violating privacy?

1. Some data is collected from the device and sent to a server for processing. Until Apple does something truly radical (like 100% encrypted information that is processed on-device), they are just like every other company that opens up users' data to 'exfiltration' by third parties.


Well, to your point B: for a start, I've already paid Apple. They have much less financial incentive to use my data in ways inimical to me than, say, an advertising company would.


They're certainly not the only ones in this space, but they've been talking about differential privacy recently[1]. So there are possibilities between "everything is on our server" and "everything is on your device".

[1]: http://www.recode.net/2016/6/24/11967188/apple-data-collecti...
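
For anyone unfamiliar, the core trick is decades old: randomized response. Here's a minimal sketch in Python, as an illustration of the general idea rather than anything Apple has described:

    import random

    def randomized_response(truth: bool, q: float = 0.5) -> bool:
        # With probability q, ignore the truth and report a fair coin flip;
        # otherwise report honestly. Any single report is therefore deniable.
        if random.random() < q:
            return random.random() < 0.5
        return truth

    def estimate_true_rate(reports, q: float = 0.5) -> float:
        # If the true rate is pi, E[fraction reporting True] = q/2 + pi*(1-q),
        # so we can invert that to recover pi from many noisy reports.
        f = sum(reports) / len(reports)
        return (f - q / 2) / (1 - q)

No individual answer reveals much, but the aggregate estimate converges on the truth as the number of reports grows.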


> Until Apple does something truly radical (like 100% encrypted information that is processing on-device)

This is exactly what Apple does, which should answer question B.


> This is exactly what Apple does

That is not true, even the puff-piece article admits as much if you read it carefully. This can be proven trivially: is Siri processed locally?


Answering only the trivially erroneous objection, and ignoring two other better-founded ones, does not reflect confidence in your stance.


In the footnote to the point GP was replying to, I had said

> Until Apple does something truly radical (like 100% encrypted information that is processing on-device)

You can call it trivial; I will argue otherwise. No one outside of Apple knows how big a percentage cloud-based processing is, but my point remains - Apple does what everyone else is doing; the only difference is proportion.


> I just think it would be really neat if people would settle on one narrative or the other

Let's start with Apple's narrative. Throughout the piece, there is no direct quote mentioning any Apple innovation. The quotes are actually fairly weak:

“We’ve been seeing over the last five years a growth of this inside Apple”

“I loved [publishing at Microsoft Research] and published many papers. But when Siri came out I said this is a chance to make these deep neural networks all a reality”

“Speech is an excellent example where we applied stuff available externally to get it off the ground.”

What I get from it is that Apple has started following Google's, Microsoft's and Facebook's lead in the past five years, and has a few knowledgeable employees reusing and combining algorithms published in papers from other companies or academia.

But fundamentally, the reason they perform this PR is to start being viewed as knowledgeable in the field, so that when they release self-driving cars in 2017 or 2018, the public trusts them.

And to let us know that they're hiring: “Though today we certainly hire many machine learning people, we also look for people with the right core aptitudes and talents.” The hint is not subtle.


> if people

Your mistake here is thinking "people" are one coherent group, when in fact what you're describing is (probably) two different ends of a polarised discussion! You're probably in a more moderate central position (generally satisfied iPhone owner). It may or may not be a good thing that Apple is so controversial that they provoke such vigorous discussion (my personal take is that it is), but that not everybody is satisfied with "good enough" isn't a reason to silence discussion. Particularly when as a brand they're pitched as high end rather than just good enough.


Vigorous discussion isn't an unqualified good. If it goes somewhere, sure. But discussions on HN of Apple's business activities seem rarely at best to do that. I don't understand why the light:heat ratio in this kind of thread is so low, and usually I keep my mouth shut about it, but today the urge to say something overmastered my self-discipline on the matter. I don't pretend that my comments here constitute any more of a contribution to worthwhile discourse than anyone else's do.


I'd venture that it's probably because Apple don't encourage discussion. As demonstrated in this article and pointed out by other commenters, a large veil of secrecy comes down when it comes to detailing specifics. That's all well and good when things are working, but when they aren't, what you're faced with is marketing bluster and iron-fisted PR. I'll reiterate once more: for a company that continually touts itself as "top end", that just isn't good enough.


People aren't just one person, they have diverging opinions. You can't expect everyone to settle on one narrative. Plus, you know, trolls.

It basically happens to every company, nothing to be sad about.


Please get over it already. Steve Jobs was just one man in a company of thousands; sure, he was their leader, but someone else will step up, and I'm sure the company has many brilliant minds within who are still trying to innovate. Let's give Apple some slack and stop crying about Steve Jobs.


There is only one way to win, and that is to publish. Innovation in this field doesn't happen in a vacuum but through multiple simpler components combining into ever more emergent complexity.


Apple can free-ride on other people publishing and synthesise in house. The field might lose out (but there are always free riders), but does Apple?


> The field might lose out (but there are always free riders) but does Apple?

Yes - those in the field who prefer to publish will not be willing to join Apple. How much of an opportunity cost this is to Apple remains an open question.


In the article, Apple spun that as an advantage.

> “Our practices tend to reinforce a natural selection bias — those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing,” says Federighi.

And I think they may be right. Researchers often aren't the best product creators.


> Researchers often aren't the best product creators.

You NEED researchers to build a robust autonomous vehicle or speech recognition system. Hell, you know how many electrical and materials scientists they have working on the iPhone?

Physical products actually require science and research. Designing a responsive landing page does not.


You need people who understand the science but whose primary focus is developing the product, not developing new theory.

Edison and Ford vs Einstein and Faraday.


> “Our practices tend to reinforce a natural selection bias — those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing,”

This is a blatantly false dichotomy: there's nothing about publishing that precludes anyone from "working as a team to deliver a great product".

> Researchers often aren't the best product creators.

Tell that to Andrew Ng!


Andrew Ng is a great example.

He's made enormous contributions to human knowledge, but doesn't seem to be interested in the hard work of bringing a product to market.

He even left Coursera to return to theory.


> He's made enormous contributions to human knowledge, but doesn't seem to be interested in the hard work of bringing a product to market.

What exactly do you define as "the hard work of bringing a product to market"? To me, his work at Google certainly counts as bringing a product to market - the cat-recognition deep learning AI is a product.


To me, recognizing cats isn't a very useful product; that's just a demonstration of the technology.

That led to an interesting paper and a lot of potential applications.

By "the hard work of bringing a product to market", I meant the process of perfecting the product for actual applications.


"recognizing cats" isn't a product at all - the AI behind it is the product, albeit barebones. I've heard folk on HN call such a thing an "MVP".

> By "the hard work of bringing a product to market", I meant the process of perfecting the product for actual applications

I'll have to disagree - "perfecting the product" is one of many roles required to bring a product to market. Other roles that are just as important are "dreamer/visionary", "practical starter/founder", "scale-up person". Sometimes one person can assume multiple roles, but rarely all roles, all of them are hard work.


I'm not sure what you mean by "the AI behind it". A particular model, the theory, the hardware, or something else?


Perhaps they seek to win a game other than the one you have in mind.


I don't think there's any other game to win for a company like Apple.


Is it possible that this thought does not accurately reflect the reality of the matter?


Sure but I doubt it.

Any company that wants to win in the long run will have to make machine learning a central part of its culture.


Which would appear to be what this article describes them doing.


I realize this is going to get voted down, but this piece reads like a Trumpism: "we have great AI, trust me, you would not believe, if you could see what we're doing".

I don't think you get to claim PR credit for advances in ML unless you publish. In general, for R&D, IMHO, you need to publish. There's product R&D, and there's fundamental R&D. If you make an advance in something fundamental that also helps your product, publish it. If it's specific to your product only and can't be transferred elsewhere, then maybe it's OK to keep it secret.

Apple's and Google's competitive advantage now arises from scale and path dependency. I think they need to let go of the idea that they derive a competitive advantage by keeping these things secret. The open AI community is going to advance at an accelerated rate regardless, and IMHO it's better to be part of it than to be seen as a kind of parasite that consumes public R&D but doesn't give back improvements.


> If it is specific to your product only and can't be transferred elsewhere, then maybe it's ok to keep it secret.

Wouldn't it be the other way around? If competitors can benefit from your knowledge, you'd want to keep it secret.


No - if everyone can benefit, that, IMHO, is precisely why it should be contributed back to the community. And if you want a selfish reason for the altruism: the improvements the external community makes will likely benefit you, including the contributions made by your competitors.

I think a culture of secrecy yields local optima. Secrecy only benefits you if you believe (and it is true) that your company has unique geniuses whose science can't benefit from other people's review.

IMHO, the only research to keep secret is research that is useless to your competitors, in the sense that it is too specific to your own proprietary dependencies.


I find these heavily curated advertorials Apple has been pushing out (this and the recent Wired advertorials come to mind) a bit of a sign that not everything is sunny at One Infinite Loop.


Probably no more so than Google. Steven Levy is basically the go-to when you want this sort of stuff. For example, https://backchannel.com/how-google-is-remaking-itself-as-a-m...


IMO the most profound quote in the interview: “Our practices tend to reinforce a natural selection bias — those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing,” says Federighi.


Nice article, but in my perception Apple is way behind Google in AI on mobile. Siri's current speech recognition is still far behind Google's on my iPhone: half the time Siri doesn't recognize something, but Google recognizes it right away. Apple Maps continues to lag in features like biking and public transit directions, and even in things like which lane to take on a big interchange. As a result, I end up using Google (Now) and Google Maps by default.


Never mind biking. For example in Switzerland Apple Maps doesn't even seem to take traffic information into account when it gives you ETAs. It shows heavy traffic as red spots, but it doesn't know the delay. Google meanwhile gets all of this perfectly.


Why should Apple researchers contribute back to the AI community on their shareholders' dime? What makes AI special as opposed to any other field of computer science?


The long term goal is loftier than almost all others except for extending human life and populating other star systems.


That's an opinion. I don't think you could get a random group of 3 AI researchers to agree on what the long term goal is, or if there even is a long term goal as opposed to this simply being a very buzzword- and marketing-friendly area of research at the moment.


> If you’re an iPhone user, you’ve come across Apple’s AI, and not just in Siri’s improved acumen in figuring out what you ask of her. You see it when the phone identifies a caller who isn’t in your contact list (but did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you’ve reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to. These are all techniques either made possible or greatly enhanced by Apple’s adoption of deep learning and neural nets.

This whole paragraph must be a joke. Google has been doing all of this for ages, and they don't even promote these as their best features.


I've never had a single one of these happen to me. Has anyone actually seen these behaviors out in the wild? Is it because I use Gmail, Chrome, and Google Maps, and shut off most of the Siri/recommended-apps/etc. functions in favor of serious battery life gains?

>You see it when the phone identifies a caller who isn’t in your contact list (but did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you’ve reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to


Nope, but it often tells me how long it will take to drive to work (I never drive to work), even when I'm on the other side of the country on holiday, and have been for two weeks.


So glad I'm not the only one. How do you turn this crap off?


It will take 524 minutes to drive to work. Traffic is normal.


> shut off most of the siri/recommended apps/etc functions in favor of serious battery life gains?

That won't get you any battery life. Just use low power mode and turn down the backlight if you need more.


I am really disappointed by Apple. I respect Apple's wish to develop products in secrecy, and I understand that you can't just open source your secret sauce. I also really like their products.

But not publishing your advancements harms the community greatly. It's like building your product entirely with open-source software (the published work of other researchers) and not contributing back.


That's what they did with OS X.


OS X is much more than the kernel. Darwin is open source, even. What are you talking about?


The truth of that statement is dwindling rapidly with time, however. XPC is closed source, and with launchd being subsumed into it as it has become more important to macOS, Darwin hasn't had a working init system for years.

It's been effectively TiVo'ed, or turned into shared-source-ish instead of open enough to run it as a real OS.


How much of Apple's investment in OS X was in Darwin, and how much was in proprietary technology?

Apple built a proprietary product using open source technology, and are now building proprietary products using open ML research.

But it seems they're doing so for their own profit, not to benefit the open source and research communities.


>Apple built a proprietary product using open source technology

You say that as if it's somehow contradictory. Open source licenses allow (LGPL etc.), and some even welcome (BSD, MIT, Apache), creating proprietary software based on open source technologies.

Besides, Apple has also extended open source technologies (as open source) a hell of a lot. WebKit after Apple is 100x bigger/fancier/better than the puny KHTML it started from. Tons of LLVM work, especially all the early stuff, was sponsored by Apple. Swift was made open source just recently...


Apple's core "product" is the user experience. Darwin is open source, Carbon and Cocoa are not.

I think that philosophy will continue as they use ML tools. They might share the ML equivalent of plumbing like WebKit/LLVM/Swift, but probably not improvements to the user experience like Siri's brain.


>Apple's core "product" is the user experience. Darwin is open source, Carbon and Cocoa are not.

So? Why should they open source and commoditize their core product?

Besides, I don't know any company that did it and got much out of it, except for some gratitude from OSS fans.


I'm not saying they should, I'm saying they wouldn't (and probably shouldn't).


That's what the top level comment said, you are only adding noise.


I added a question.

My impression is that Apple invested vastly more in the closed parts of OS X than the open parts.


A rhetorical question, I thought.


Well, I don't think so.

Here is the kernel: http://opensource.apple.com/source/xnu/xnu-3248.60.10/ What's above it is not rocket science, and the argument could be made that open-sourcing it would hurt Apple (whether that's true or not is irrelevant; it's a valid argument). But this research is not in that category.


This is a free country and a capitalist society, so Apple has every right to do what they want, and I don't object to it. What you're proposing is socialist, and that has its pluses and minuses.


Apple sends their employees to ML/AI conferences with fake company names on their badges to avoid leaking a single bit of their knowledge. I don't know how any AI researcher resists working at Apple!


So - I don't need a source on this to grill you and score internet rhetoric points; I need a source to share this juicy factoid with skeptical friends.


I have a big problem with articles like this.

Apple's PR is notorious for cracking the whip, which means that the "inside story", if they give it to you, comes with a warning to the journalist to behave and be nice. Levy's piece is generous with flattery and cautious with criticism. He quotes Kaplan and Etzioni high up and briefly in the piece, and spends the rest of it refuting them. Apple will give him another inside story down the road.

Apple has a big question to resolve for itself about the tools it's going to use for this. It can't go with TensorFlow, because TF is from Google. It's kind of at another turning point, like the one in the early 90s when it needed its own operating system and Jobs convinced them to buy NeXT and use what would become OS X.[0]

The most pointed question to ask is: What are they doing that's new? The use cases in the Levy story are neat, and I'm sure Apple is executing well, but they don't take my breath away. None of those applications make me think Apple is actually on the cutting edge. There's no mention of reinforcement learning, for example; there is no AlphaGo moment so far where the discipline leaps 10 years ahead. And the deeper question is: Is Apple's AI campaign impelled by the same vision that clearly drives Demis Hassabis and Larry Page?

We see what's new at Google by reading DeepMind and Google Brain papers. Everyone else is letting their AI people publish, which is a huge recruiting draw and leads to stronger teams. Who, among the top researchers, has joined Apple? Did they do it secretly? (This is plausible, and if someone knows the answer, please say...) The Turi team is strong, yes, but can they match DeepMind? If Apple hasn't built that team yet, what are they doing to change their approach?

Another key distinction between Apple and Google, which Levy points out, is their approach to data. Google crowdsources the gathering of data and sells it to advertisers; Apple is so strict about privacy that it doesn't even let itself see your data, let alone anyone else. I support Apple's stance, but I worry that this will have repercussions on the size and accuracy of the models it is able to build.

> “We keep some of the most sensitive things where the ML is occurring entirely local to the device,” Federighi says.

Apple says it's keeping the important data, and therefore the processing of that data, on the phone. Great, but you need many GPUs to train a large model in a reasonable amount of time, and you simply can't do that on a phone. Not yet. It's done in the cloud and on proprietary racks. So when he says they're keeping it on the phone, does he mean that some other encrypted form of it is shared on the cloud using differential privacy? Curious...

> "How big is this brain, the dynamic cache that enables machine learning on the iPhone? Somewhat to my surprise when I asked Apple, it provided the information: about 200 megabytes.."

Google's building models with billions of parameters that require much more than 200MB, and that are really, really good at scoring data. I have to believe either that a) Apple is not telling us everything, or b) they haven't figured out a way to bring their customers the most powerful AI yet. (And the answer could very well be c) that I don't understand what's going on...)

[0] If they have a JVM stack, they should consider ours: http://deeplearning4j.org/


>> "Apple is so strict about privacy that it doesn't even let itself see your data, let alone anyone else. I support Apple's stance, but I worry that this will have repercussions on the size and accuracy of the models it is able to build."

I don't think this is something we should worry about. If you want better models use Google. If you want better privacy go with Apple. It's fantastic that we actually have a choice and don't all have to sign our privacy away or live in the dark ages.


Good comment - a few thoughts:

AlphaGo is impressive, no doubt, but has DeepMind done anything really key to Google's bottom line yet? A lot of the really sweet stuff they do doesn't have immediate commercial utility that I can see. Apple might be waiting to strike once Google has found the killer app (remember, they're never really first at anything, and they focus holistically on the product).

Apple is benefiting from other companies releasing their research. If everyone but Apple publishes, the community is nearly as well off, and Apple gets its pick of external and internal research while not needing to give up any of its own ideas. I know they send people to conferences, and it can be a bit weird talking to someone who won't tell you anything about what they do.

Regarding researchers, I don't know of any top trend setters who've joined but they do have some very good applied ML people through direct- or aqui-hires.

Tl;dr: Apple is doing what they usually do - keeping their powder dry and free-riding off others' work until they can execute the product.


DeepMind reduced Google's data center cooling bill by 40% in an experiment. If this effect is real, it probably more than justified the $500M price tag Google paid to acquire DeepMind. Google might not have come up with a killer app, but perhaps the real killer app is not in the consumer space.

https://deepmind.com/blog

https://news.ycombinator.com/item?id=12126298


To be fair, the HN discussion mostly focused on the lack of substantial details or an actual research paper to back up that claim. Google would probably need to show that only DeepMind, or something equivalent, could have found the energy savings it ended up finding.


Taking that at face value, that's amazing.


Now extrapolate that to all datacenters in the industry.

To all factories, power plants, homes, etc.

Imagine you can cut the cooling/heating bill of 1/2 the industrial world by 40%.


up to 40%


So Apple is choosing to play a prisoner's dilemma and defect every time.


Yeah, Tim Cook is the All-Defector.

http://exomemory.wikia.com/wiki/All-Defector


>> "Google's building models with billions of parameters that require much more than 200MB, and that are really, really good at scoring data. I have to believe either that a) Apple is not telling us everything, or b) they haven't figured out a way to bring their customers the most powerful AI yet. (And the answer could very well be c) that I don't understand what's going on...)"

The 200 MB figure quoted appears to refer only to the model stored locally on the phone. In my experience, 200 MB translates to a few million parameters in one or more sparse matrices.

The figure on the whiteboard in the background says "Hey Siri small". I take that to indicate a model that does feature extraction and prediction for some queries, such as "set a timer for 20 minutes", while a larger, more general model covers other use cases in the cloud.
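
Back-of-envelope on what 200 MB can hold, with my own assumptions about precision and index overhead (nothing Apple has disclosed):

    MODEL_BYTES = 200 * 1024**2             # the ~200 MB figure from the article

    # Dense float32 weights cost 4 bytes per parameter.
    dense_params = MODEL_BYTES // 4         # ~52 million

    # Sparse storage typically pays for an index next to each value
    # (e.g. 4-byte float + 4-byte int32 index), roughly halving capacity.
    sparse_params = MODEL_BYTES // (4 + 4)  # ~26 million

    print(f"dense:  ~{dense_params / 1e6:.0f}M parameters")
    print(f"sparse: ~{sparse_params / 1e6:.0f}M parameters")

So tens of millions of parameters is the ceiling; per-row metadata, multiple matrices, and non-weight assets push the effective count lower still.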


Google's way of working with Steven Levy (or The Verge) is no different. You get inside exclusive quotes if you are "nice", you get blacklisted by the company if your article comes off critical.

https://backchannel.com/how-google-is-remaking-itself-as-a-m...


There are a lot of criticisms here about Apple's secretive AI/ML development practices, but none of this is unusual given Apple's long cultural heritage of secrecy.

From a consumer's perspective, I applaud their firm belief in customer privacy, as well as their pioneering of consumer products built on AI/ML development AND with differential privacy in mind.[1][2]

[1] http://highscalability.com/blog/2016/6/20/the-technology-beh...

[2] http://www.imore.com/our-full-transcript-talk-show-wwdc-2016...


This feels very much like a story done from a marketing/PR angle.


Is there a way to opt out of this on iOS and macOS? That's really scary.

OS-level ML and AI should run decentralized on the device itself and not leak data at all. That was the spirit of the 1990s, and older desktop software works fine that way (even on Pentium 1 hardware), so it would be a piece of cake on a modern smartphone.

The "differential privacy" technology may sound good, but without an independent audit, who knows how well it works.


Siri has gotten worse over time, or at least that's what friends and I have noticed.


I've noticed it get drastically better. I never used to be able to use it at all. It was just a pointless exercise. Now it works for everything I use it for, which isn't a lot, but I don't get any issues.


I haven't noticed anything, because the only thing it ever managed to do reliably is set timers, so that's all I ever used it for.

And one in 20 times you'll get a "here is the timer" where it just shows you yesterday's already-completed dinner timer.


Siri is now a term Apple uses to cover many virtual assistant applications, not just the conversational side of things. The article goes in depth on things such as Siri Suggestions and proactive notifications based on patterns and context.


Off topic a bit, but this article makes me wonder: where does one start with deep learning/machine learning/AI? I've seen a few posts the past few days on the topic (Deep Learning with Python, etc.), but what are the core requirements regarding math, statistics, programming, etc.? Where should a web developer start?


Check out the Stanford MOOC class taught by Andrew Ng... that's where most people (that I know of) started. https://www.google.com/search?client=safari&rls=en&q=Andrew+...

Web developer to ML? Doable, but you may have to work harder than folks who have a background in statistics, probability, signal processing, etc. Plus, you don't have to be an algorithms developer... doing the backend / compute cluster work is hugely valuable, too.
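
To give a taste of the level involved: the course's early programming exercises are essentially linear regression by gradient descent, which fits in a few lines of NumPy. A sketch of my own (not course material):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=100)                    # one input feature
    y = 3.0 * X + 2.0 + rng.normal(0, 0.5, size=100)    # y = 3x + 2 + noise

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(2000):
        err = w * X + b - y                 # prediction error
        w -= lr * (err * X).mean()          # gradient of half-MSE w.r.t. w
        b -= lr * err.mean()                # gradient of half-MSE w.r.t. b

    print(f"learned w={w:.2f}, b={b:.2f} (true: 3.00, 2.00)")

If that reads comfortably, you have enough programming and calculus footing to start the course.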


Thank you, I actually just found him on Coursera(https://www.coursera.org/learn/machine-learning) before your reply. I will definitely check it out.

I don't think that I want to make the switch, just know what is what and dabble a little bit.


Has anyone considered the possibility that Apple has made a judgement that the recent resurgence in AI/ML is, like all the ones before it over the past 50 years, overblown, and that it would rather spend its time and resources on other things?

There's no doubt that the recent advances in deep learning have improved ML/AI in certain specific domains... but it seems like every 15-20 years we see an advance and an accompanying narrative that "AI is back! The fully automated future is near!"... which then fizzles out, again.

Also, Apple has a more humanist tradition than Google, FB, etc., and it's my impression that they value the human element more.

Sure, there's Siri, but Siri strikes me more as an ongoing experiment than a full-fledged, whole-hog "let's put all our eggs in this ML/AI basket".


They were certainly at NIPS. I got a nice Apple pen and ski hat...


It isn't clear from the article: do they still use Nuance?



