For me, the greatest thing about the ML/AI community is how open it is and how strong a sense of camaraderie there is between people across the entire field, regardless of whether they're from industry or academia.
Employees from competing companies will meet at a conference and actually discuss methods.
Papers are released to disseminate new ideas and as a way of attracting top-tier talent.
Code is released as a way of pretraining students in the company's stack before they ever step through the company's doors.
Papers are published on arXiv when the authors feel they're ready - entirely free to access - without waiting for a conference for their ideas to be spread.
This entire push of camaraderie has accelerated the speed at which research and implementation have progressed for AI/ML.
... but Apple are not part of that. They publish little and, more broadly, don't have a good track record. On acquiring FoundationDB, they nixed it with little regard for existing customers. Fascinating pieces of technology, lost. If they aren't using the exact thing internally, why not open-source it? I fear the same is likely to happen to Turi, which is especially sad given the number of customers they had and the contributions many of Turi's researchers previously made to the community via their published papers.
Apple may change in the future - they may become part of the community - but a vague article of self-congratulation isn't going to sway me in either direction.
"We have the biggest and baddest GPU farm cranking all the time" ... Really? ¯\_(ツ)_/¯
They'd be perfectly fine with not talking about it as they've done until now if they weren't trying to counter these media narratives.
As to your point about Apple not engaging with the research community, how is this a surprise at all? Their M.O. has been secrecy ever since Jobs returned to Apple.
The talent war for AI/ML is truly insane. Given that the top researchers and engineers can get similar benefits at other entities, whilst also continuing to publish code and research out in the open, Apple aren't that attractive.
They may try compensating for this via acquisitions but that still leaves a fundamental issue when it comes to long term retention.
As I mentioned before, this is a fairly universal trait. Many come from academia or have open source affiliations / rely on open source tools for their skill set, so the idea of putting their ideas out there to be used, ...
Some people don't want to work at Apple because they can't describe their job on their LinkedIn pages or tell their friends and family about what they do. Others don't like that they have to buy their own meals at Caffe Macs. I don't think Apple worries about this too much.
What they do worry about is finding people that fit Apple's culture and want to work on products used by hundreds of millions of people.
In any case, being perceived or not to be at the top of the field and being able/unable to hire the best ML/AI talent isn't an existential threat. ML/AI isn't magic, it's a technology. If there's one thing that the tech industry should have learned from Steve Jobs, it's that you make good products by working backward to the technology. In that respect, what Federighi mentioned about not having a ML/AI division was reassuring.
Let me simplify that further: "If what AI/ML talent wants is to publish...then you're right, they're not going to work at Apple". Money may or may not be a factor, but Apples secrecy won't allow the talent to publish their research. Given that AI is highly driven by research right now (no "if" about it), what talent would want to work somewhere where they can't publish research papers?
There are a lot of people out there who don't particularly subscribe to the publish-or-perish ethos of academia and would rather publish only when they have something worthwhile to publish, not just footnotes to an established scheme.
and so on.
Ehh. I would question whether doing research is necessarily highly correlated with wanting to publish papers all the time. A lot of the work during product development is a mix of AI, UX, and engineering. The entire output might well qualify as research, but the individual components might require far more fleshing out to bring each to publishable quality. Since we don't have the publish-or-perish doctrine of academia, we aren't forced to publish at the same rate.
You're weighting open publishing very heavily, but others may have different weighting functions. The ability to interact with device makers may be convincing to some, or maybe others like the Apple company and culture, or Apple's privacy stance.
Criticizing the article as PR or boasting is true, but somewhat beside the point, because most of the players in this field have always -- for decades -- had a tendency to PR and boasting. Admittedly, proofs-of-concept like AlphaGo do provide substance behind the boasting, but they have a definite PR element too.
It may not be self-congratulation, but this sentence is definitely unsubstantiated slander.
Now it all comes down to the same old paradox. The current patent system encourages patent trolls and discourages real innovators from publishing, because their ideas are too easily copied.
One alternative is open source, but when huge R&D costs are involved it's maybe not the best idea. I think Apple is currently doing a great mix: they work on some projects in the open (WebKit, Swift) where community feedback is important... and they keep innovative R&D techniques secret so they can monetize them.
I'm not sure why this is a "big deal" or even much of anything. Apple has always done this. Even their work on WebKit they were not always "there" with the community. This is how Apple has always worked, why would it be any different with AI / ML?
> For me, the greatest thing about the ML/AI community is how open it is and how strong a sense of camaraderie there is between people across the entire field, regardless of whether they're from industry or academia.
You get this in every field but there are also a ton of players out there not going. Just like Apple there are plenty of companies hard at work on various AI initiatives who are also not going to said events.
I think you're making a mountain out of a molehill here. Does anyone really care Apple doesn't give away free time / knowledge to people?
So what are these companies doing? Linear regression in excel? I don't see any of them beating Go players or building self-driving cars.
> Does anyone really care Apple doesn't give away free time / knowledge to people?
End users, no. Engineers and researchers, yes. Who builds the products the end users love? Engineers and researchers.
Because they aren't telling the world. On a lesser scale, CMU was building self driving cars for years before Google decided to step up the PR and tell everyone self driving cars were possible.
Google's self-driving cars are one of the most advanced uses of technology I've seen. It's borderline magic in our time.
Edit: removed "give credit where it's due". You weren't withholding that, apologies.
Uh, well, what are the use cases for AI / ML? That's what they're doing. They're using it all over the place. It's almost unheard of to find a small tech company not at least exploring ML to see if they can use it to gain an edge.
Why do they have to detail what they're doing so they're not automatically bucketed into the "doing linear regression in excel" camp?
> Engineers and researchers, yes. Who builds the products the end users love? Engineers and researchers.
Funny, these same practices at Apple for the past 40 years haven't turned engineers and researchers off from working there. It isn't turning them off now, either. Apple has never had to give away time and knowledge for free to attract talent. In fact most companies don't need to do this. Relatively speaking only a few companies do this.
Seems a bit entitled, in my opinion, to think technology companies have to give away their time and expertise. They're going to attract talent if they're interesting / create interesting things as a company.
Along the same vein, does Apple even employ any 'brand-name' researchers? From what I've seen, all they do is take research invented somewhere else and just apply it. I think Alex Acero is the biggest recognizable researcher that Apple employs.
So to answer your question, they don't publish and don't attend conferences because they simply don't have many researchers on board.
I think it's within their DNA.
They are a very secretive company for a lot of historical reasons.
Company culture is what a company is ... the social rules built into the organization, which are very difficult to change once established.
Apple is within their rights to do what they are doing. Other companies take different approaches, so it's healthy to have that variety.
If this were weapons systems or cancer cures, I might have a different opinion.
It's true, they saw what happened to Xerox PARC's ground-breaking UI work...
Smartphones would not exist had BlackBerry not created the market for them.
But they are secretive not for that reason - they are secretive because they want to announce and present things on their own terms. They don't want leakage pre-product.
If they publish, they aren't doing anything new and they haven't innovated since Steve died, and they should really just give up because there's obviously no point to anything they do and hasn't been since 1997.
If they don't publish, they're evil secretive bastards who don't contribute to the ML community and probably drown puppies or something because who knows what goes on behind closed doors?
I don't really have a dog in this fight, except inasmuch as I'm a generally satisfied iPhone owner. I just think it would be really neat if people would settle on one narrative or the other, instead of keeping on with both at once.
Many other companies are mature enough to show their cards but Apple keeps declaring victory. Apple writes its own narrative instead of participating.
There are mature ways for this to shake out and they don't include your strawman dichotomy.
But why? Who cares about papers, source code or numbers beyond the tiny segment that HN caters to? Apple has always been about the user experience. When they discuss a vast majority of prior accomplishments they discuss them, many times, in terms of the layman. Does that mean they can't manufacture? Nope. AI is simply harder to show without the papers / source code you mention but this isn't typical Apple to release that.
I honestly don't think they care to go beyond the level of details outlined here. I interviewed with the Siri team twice now and they certainly have some incredibly smart people. Whether they're "winning" against Google, Microsoft or whoever? I say who cares. They want to control their narrative without divulging too much detail like they have always done.
I don't know, say, the thousands of future engineers and researchers, that, you know, make the product?
Apple hasn't needed to do that with engineers and researchers pretty much ever; now with AI it's different? Maybe it won't help attract some small portion of people but I'm not sure why it supposedly matters now where it hasn't in the past.
Why do they need to? At the end of the day, all that matters is that the products they ship are good and aren't lacking compared to competitors. The only reason to provide evidence for their claims beyond that would be to appease people calling bullshit, and unless it's affecting sales, no company should stoop to that.
Safari and WebKit have 100% support for ES6, which is more than any other browser - http://kangax.github.io/compat-table/es6/
I think what you mean is Safari doesn't support the standards you want it to support, some of which aren't actually full standards yet but candidates.
(And I'm not even going to touch the frankly invidious IE comparison.)
I grant testing is an issue if you don't own Apple hardware, although there are free and inexpensive remote testing services which cover that need well, and I'm certainly not about to say that Safari doesn't pose any unique difficulties to the web developer. But so does every other browser.
Common problems include items not syncing, syncing out of time order, and conflict resolution causing synced data to appear lost. Most devs that are really serious about sync tend to roll their own after experiencing enough issues to make iCloud not worth the effort.
This is entirely based on my experience and may not be universally true, but I've heard enough devs repeat my own complaints to feel like it's not completely out of left field.
Safari is perfect for my use cases. Way better battery life, feels lighter than chrome, etc, etc.
What exactly in your opinion is worse about modern safari and maps that makes a worse experience for the user?
I disagree - they are not making any claims that they care to be validated. They don't care to be 'known as #1' in whatever field. We can believe what we want as far as they are concerned.
'If they are serious' - they will make this technology work to make better experiences for end consumers.
'The proof is in the pudding' - so to speak.
I for one don't doubt that their AI 'increased the intelligence of Siri'. But I also don't care about Siri and find it basically useless. AI has different kinds of impacts in different domains. If they make better experiences for us - all the power to them. If not, then not ...
They have themselves to blame, at least partially, since they painted themselves into a corner. A while back they went on a media offensive proclaiming that "data-mining" users' information is bad and that they won't stand for it, in the interest of privacy... As it turns out, reality is a little more nuanced, and a bit of applied AI is in fact good for the user experience. E.g. your phone can't tell you where you parked your car unless it knows your location and when you stopped driving.
I see this article as Apple PR rolling back on the previous position which inadvertently made Apple look like a company being overtaken by recent technological developments.
It explicitly talks about how they've accomplished a lot of this without violating their users' privacy - and about how denying AI researchers big hoards of data hurts their reputation in the AI community, because most AI researchers want mountains of data.
My questions are: A. Who is violating users' privacy? B. How is Apple any different from them, besides proclaiming they aren't violating privacy?
1. Some data is collected from the device and sent to a server for processing. Until Apple does something truly radical (like 100% encrypted information that is processed on-device), they are just like every other company that opens up users' data to 'ex-filtration' by third parties.
This is exactly what Apple does, which should answer question B.
That is not true, even the puff-piece article admits as much if you read it carefully. This can be proven trivially: is Siri processed locally?
> Until Apple does something truly radical (like 100% encrypted information that is processing on-device)
You can call it trivial; I will argue otherwise. No one outside of Apple knows how big a percentage cloud-based processing is, but my point remains - Apple does what everyone else is doing; the only difference is the proportion.
Let's start with Apple's narrative. Throughout the piece, there is no direct quote mentioning any Apple innovation. The quotes are actually fairly weak:
“We’ve been seeing over the last five years a growth of this inside Apple”
“I loved [publishing at Microsoft Research] and published many papers. But when Siri came out I said this is a chance to make these deep neural networks all a reality”
“Speech is an excellent example where we applied stuff available externally to get it off the ground.”
What I get from it is that Apple has started following Google's, Microsoft's and Facebook's lead in the past five years, and has a few knowledgeable employees reusing and combining algorithms published in papers from other companies or academia.
But fundamentally, the reason they perform this PR is to start being viewed as knowledgeable in the field, so that when they release self-driving cars in 2017 or 2018, the public trusts them.
And to let us know that they're hiring: “Though today we certainly hire many machine learning people, we also look for people with the right core aptitudes and talents.” The hint is not subtle.
Your mistake here is thinking "people" are one coherent group, when in fact what you're describing is (probably) two different ends of a polarised discussion! You're probably in a more moderate central position (generally satisfied iPhone owner). It may or may not be a good thing that Apple is so controversial that it provokes such vigorous discussion (my personal take is that it is), but that not everybody is satisfied with "good enough" isn't a reason to silence discussion - particularly when, as a brand, they're pitched as high-end rather than just good enough.
It basically happens to every company, nothing to be sad about.
Yes - those in the field who prefer to publish will not be willing to join Apple. How much of an opportunity cost this is to Apple remains an open question.
> “Our practices tend to reinforce a natural selection bias — those who are interested in working as a team to deliver a great product versus those whose primary motivation is publishing,” says Federighi.
And I think they may be right. Researchers often aren't the best product creators.
You NEED researchers to build a robust autonomous vehicle or speech recognition system. Hell, you know how many electrical and materials scientists they have working on the iPhone?
Physical products actually require science and research. Designing a responsive landing page does not.
Edison and Ford vs Einstein and Faraday.
This is a blatantly false dichotomy: there's nothing about publishing that precludes anyone from "working as a team to deliver a great product"
> Researchers often aren't the best product creators.
Tell that to Andrew Ng!
He's made enormous contributions to human knowledge, but doesn't seem to be interested in the hard work of bringing a product to market.
He even left Coursera to return to theory.
What exactly do you define as "the hard work of bringing a product to market"? To me, his work at Google certainly counts as bringing a product to market - the cat-recognition deep-learning AI is a product.
That led to an interesting paper and a lot of potential applications.
By "the hard work of bringing a product to market", I meant the process of perfecting the product for actual applications.
> By "the hard work of bringing a product to market", I meant the process of perfecting the product for actual applications
I'll have to disagree - "perfecting the product" is one of many roles required to bring a product to market. Other roles that are just as important are "dreamer/visionary", "practical starter/founder", "scale-up person". Sometimes one person can assume multiple roles, but rarely all roles, all of them are hard work.
Any company that wants to win in the long run will have to make machine learning a central part of its culture.
I don't think you get to claim PR credit for advances in ML unless you publish. In general, for R&D, IMHO, you need to publish. There's product R&D, and there's fundamental R&D. If you make an advancement in something fundamental, but that helps your product, then publish it. If it is specific to your product only and can't be transferred elsewhere, then maybe it's ok to keep it secret.
Apple and Google's competitive advantage now arises from scale and path dependency. I think they need to let go of this idea that somehow they derive a competitive advantage by keeping these things secret. The Open AI community is going to advance at an accelerated rate regardless and IMHO, it's better to be part of it than to be seen as a kind of parasite that consumes public R&D, but doesn't give back improvements.
Wouldn't it be the other way around? If competitors can benefit from your knowledge, you'd want to keep it secret.
I think a culture of secrecy yields local optima. Secrecy will benefit you only if you believe (and it is true) that your company has unique geniuses who can't benefit from other people reviewing their science.
IMHO, only research that is useless to your competitors is research to keep secret in the sense that it is too specific to your own proprietary dependencies.
This whole paragraph must be a joke. Google has been doing this for ages, and they don't even promote these as their best features.
>You see it when the phone identifies a caller who isn’t in your contact list (but did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you’ve reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to
That won't get you any battery life. Just use low power mode and turn down the backlight if you need more.
But not publishing your advancements harms the community greatly. It's like building your product entirely with open-source software (the published work of other researchers) and not contributing back.
It's been effectively TiVo'ed, or turned into shared-source-ish instead of open enough to run it as a real OS.
Apple built a proprietary product using open source technology, and are now building proprietary products using open ML research.
But it seems they're doing so for their own profit, not to benefit the open source and research communities.
You say that as if it's somehow contradictory. Open source licenses allow (LGPL etc.), and some even welcome (BSD, MIT, Apache), creating proprietary software based on open source technologies.
Besides, Apple has also extended open source technologies (as open source) a hell of a lot. WebKit under Apple is 100x bigger/fancier/better than the puny KHTML it started from. Tons of LLVM work, especially all the early stuff, was sponsored by Apple. Swift was made open source just recently...
I think that philosophy will continue as they use ML tools. They might share the ML equivalent of plumbing like Webkit/LLVM/Swift, but probably not improvements to the user experience like Siri's brain.
So? Why should they open source and commoditize their core product?
Besides, I don't know any company that did it and got much out of it, except for some gratitude from OSS fans.
My impression is that Apple invested vastly more in the closed parts of OS X than the open parts.
Here is the kernel: http://opensource.apple.com/source/xnu/xnu-3248.60.10/
What's above it is not rocket science, and the argument could be made that open-sourcing it would hurt Apple. (Whether that's true or not is irrelevant; it's a valid argument.)
But this research is not.
Apple's PR is notorious for cracking the whip, which means that the "inside story", if they give it to you, comes with a warning to the journalist to behave and be nice. Levy's piece is generous with flattery and cautious with criticism. He quotes Kaplan and Etzioni high and briefly in the piece, and spends the rest of it refuting them. Apple will give him another inside story down the road.
Apple has a big question to resolve for itself about the tools it's going to use to develop this. It can't go with TensorFlow, because TF is from Google. It's at another turning point, like the one in the early '90s when it needed its own operating system and Jobs convinced them to buy NeXT and use what would become OS X.
The most pointed question to ask is: What are they doing that's new? The use cases in the Levy story are neat, and I'm sure Apple is executing well, but they don't take my breath away. None of those applications make me think Apple is actually on the cutting edge. There's no mention of reinforcement learning, for example; there is no AlphaGo moment so far where the discipline leaps 10 years ahead. And the deeper question is: Is Apple's AI campaign impelled by the same vision that clearly drives Demis Hassabis and Larry Page?
We see what's new at Google by reading DeepMind and Google Brain papers. Everyone else is letting their AI people publish, which is a huge recruiting draw and leads to stronger teams. Who, among the top researchers, has joined Apple? Did they do it secretly? (This is plausible, and if someone knows the answer, please say...) The Turi team is strong, yes, but can they match DeepMind? If Apple hasn't built that team yet, what are they doing to change their approach?
Another key distinction between Apple and Google, which Levy points out, is their approach to data. Google crowdsources the gathering of data and sells it to advertisers; Apple is so strict about privacy that it doesn't even let itself see your data, let alone anyone else. I support Apple's stance, but I worry that this will have repercussions on the size and accuracy of the models it is able to build.
> “We keep some of the most sensitive things where the ML is occurring entirely local to the device,” Federighi says.
Apple says it's keeping the important data, and therefore the processing of that data, on the phone. Great, but you need many GPUs to train a large model in a reasonable amount of time, and you simply can't do that on a phone. Not yet. It's done in the cloud and on proprietary racks. So when he says they're keeping it on the phone, does he mean that some other encrypted form of it is shared on the cloud using differential privacy? Curious...
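Apple hasn't said which differential-privacy mechanism it uses, so as a hedge, here's a minimal sketch of the classic randomized-response technique - one standard way to share an encrypted/noised form of on-device data while keeping any individual's true answer deniable. All function names here are illustrative, not Apple's API:

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1); otherwise flip it.
    Larger epsilon = more accuracy, less privacy."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p else not true_bit

def estimate_rate(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s from the noisy reports.
    Inverts the expected mixing: observed = true*p + (1-true)*(1-p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulated population: 30% of 1000 users have the sensitive attribute.
random.seed(0)
true_bits = [i < 300 for i in range(1000)]
reports = [randomized_response(b, epsilon=2.0) for b in true_bits]
print(estimate_rate(reports, epsilon=2.0))  # close to 0.30
```

The point is that the server only ever sees the noisy reports, yet the aggregate statistic is still recoverable - which is plausibly what Federighi means by learning from sensitive data without collecting it in the clear.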
> "How big is this brain, the dynamic cache that enables machine learning on the iPhone? Somewhat to my surprise, when I asked Apple, it provided the information: about 200 megabytes."
Google's building models with billions of parameters that require much more than 200MB, and that are really, really good at scoring data. I have to believe either that a) Apple is not telling us everything, or b) they haven't figured out a way to bring their customers the most powerful AI yet. (And the answer could very well be c) that I don't understand what's going on...)
 If they have a JVM stack, they should consider ours: http://deeplearning4j.org/
I don't think this is something we should worry about. If you want better models use Google. If you want better privacy go with Apple. It's fantastic that we actually have a choice and don't all have to sign our privacy away or live in the dark ages.
AlphaGo is impressive no doubt but has DeepMind done anything really key to Google's bottom line yet? A lot of the really sweet stuff they do doesn't have immediate commercial utility that I can see. Apple might be waiting to strike once Google has found the killer app (remember they're never really first at anything and focus holistically on the product).
Apple is benefiting from other companies releasing their research. If everyone but Apple releases the community is nearly as good, and Apple gets the pick of external and internal research while not needing to give up any of their own ideas. I know they send people to conferences and it can be a bit weird talking to someone who won't tell you anything about what they do.
Regarding researchers, I don't know of any top trend setters who've joined but they do have some very good applied ML people through direct- or aqui-hires.
Tl;dr: Apple doing what they usually do, keeping their powder dry / freeloading off others' work until they can execute the product.
To all factories, power plants, homes, etc.
Imagine you can cut the cooling/heating bill of 1/2 the industrial world by 40%.
The 200 MB figure quoted appears to refer only to the model stored locally on the phone. In my experience, 200 MB translates to a few tens of millions of parameters in one or more sparse matrices.
The figure on the whiteboard in the background says "Hey Siri small". I take that to indicate the model that does feature extraction and prediction for some queries, such as "set a timer for 20 minutes", while there is a larger, more general model for other use cases in the cloud.
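A quick back-of-envelope check on that estimate, assuming (since Apple hasn't said) float32 weights and a simple value-plus-index sparse encoding:

```python
def params_for_budget(size_bytes: int, bytes_per_param: int = 4) -> int:
    """Upper bound on parameter count fitting in a given on-disk budget."""
    return size_bytes // bytes_per_param

budget = 200 * 1024**2  # the quoted 200 MB

# Dense float32 weights (4 bytes each):
print(params_for_budget(budget))      # ~52 million

# Sparse storage, e.g. 4-byte value + 4-byte index per nonzero entry:
print(params_for_budget(budget, 8))   # ~26 million
```

So a 200 MB budget plausibly holds tens of millions of parameters, with quantization (1-2 bytes per weight) pushing that higher still.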
From a consumer's perspective, I applaud their firm belief in customer privacy, as well as their pioneering of consumer AI/ML products built with differential privacy in mind.
ML and AI at the OS level should run decentralized on the device itself and not leak data at all. The spirit of the 1990s was that way, and our older desktop software works fine that way (even on Pentium 1 hardware), so it would run like a piece of cake on a modern smartphone.
The "differential privacy" technology may sound good, but without an independent audit, who knows how well it works.
and one in 20 times you'll get a "here is the timer" where it just shows you yesterday's already completed dinner timer
Web developer to ML? Doable, but you may have to work harder than folks who have a background in statistics, probability, signal processing, etc. Plus, you don't have to be an algorithms developer... working on the backend or compute cluster is hugely valuable, too.
I don't think that I want to make the switch, just know what is what and dabble a little bit.
There's no doubt that the recent advances in deep learning have improved ML/AI in certain specific domains ... but it seems like every 15-20 years or so we see an advance and an accompanying narrative that "AI is back! fully automated future is near!"... which fizzles out, again
Also, Apple has a more humanist tradition than Google, FB, etc, and it's my impression that they value the human element perhaps more.
Sure, there's Siri, but Siri strikes me more like an ongoing experiment than a fully fledged whole hog "let's put all our eggs in this ML/AI basket"