
This piece is interesting because Apple was saying this all along, but no one really believed them because it sounded like excuse-making. Here, though, JG is basically saying the same thing:

>Yes, I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around. And whether that's location data—like what are you doing— [or] exercise data—what's the accelerometer doing in your phone—it's just better to be close to the source of the data, and so it's also privacy preserving.

A few years ago was when this narrative was at its peak and I believe it was mostly because Google (and to a lesser extent Facebook) were talking about machine learning and AI in basically every public communication. What came of it? Were all the people who claimed Apple's privacy stance would leave them in the dust proven right? For one, being "good at machine learning" is like saying you're good at database technology. It's a building block, not a product. Maybe Google and Facebook are doing cutting edge research in the field, but so was Xerox PARC.




When it comes to machine learning, the subtlety here is that there are at least two sides or facets to machine learning: (1) training and (2) inference.

It's fair to say that there are multiple areas for AI leadership.

It is generally believed that:

Model Creation

(1) Those with access to the best data (not necessarily the most, though the two are often conflated) have a strong starting point for training models; because of this, Google, Facebook, and Microsoft are often credited with this advantage due to the nature of their businesses.

Model Application

(2) Inference/prediction at the edge, e.g. on-device, is believed to be the best point for applying those models; this can be for a variety of reasons, including latency and other costs associated with sending model input data from edge sensors/devices. Some applications are entirely impractical or likely impossible without conducting inference on-device. Privacy preservation is also a property of this approach; depending on how you want to view it, it could be a core design principle or a side effect. Apple's hardware ecosystem approach and market share (i.e., iPhones) provide a strong starting point for making the technology ubiquitous in consumer experiences.
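To make the two sides concrete, here is a minimal, self-contained sketch in Python using TensorFlow Lite: a toy model is trained "server-side," converted to a compact artifact, and then run locally on sensor data. The model, data, and feature shapes are all placeholders for illustration, not anyone's actual pipeline.

  # Minimal sketch of the split described above: train a tiny model "in the
  # data center", convert it to a compact TensorFlow Lite artifact, and run
  # inference with the interpreter the way a phone would, so the raw sensor
  # window never has to leave the device. Model and data are toy stand-ins.
  import numpy as np
  import tensorflow as tf

  # --- (1) Model creation: done server-side, with whatever data you have. ---
  x_train = np.random.rand(256, 3).astype(np.float32)       # stand-in sensor features
  y_train = (x_train.sum(axis=1) > 1.5).astype(np.float32)  # stand-in labels
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(8, activation="relu", input_shape=(3,)),
      tf.keras.layers.Dense(1, activation="sigmoid"),
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy")
  model.fit(x_train, y_train, epochs=3, verbose=0)

  # Convert to a small artifact that can be shipped to devices.
  tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

  # --- (2) Model application: inference at the edge, on local sensor data. ---
  interpreter = tf.lite.Interpreter(model_content=tflite_model)
  interpreter.allocate_tensors()
  inp = interpreter.get_input_details()[0]
  out = interpreter.get_output_details()[0]

  local_window = np.random.rand(1, 3).astype(np.float32)    # never uploaded
  interpreter.set_tensor(inp["index"], local_window)
  interpreter.invoke()
  print("on-device prediction:", interpreter.get_tensor(out["index"]))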


Re: Prediction at the edge, I would think that it's better if there aren't going to be any updates to the model, or if internet access is limited. Correct me if I'm wrong, but most ML inference actually takes place in the cloud nowadays, not on-device.


Here's a nice example that, like a lot of the best things, is always passively present: my Pixel 2 knows what music it hears.

There's a pop song playing, I kinda like it. I could pay attention to the lyrics and try to Google them or ask somebody who might know what it is... no need, I just look at my phone, "Break My Heart by Dua Lipa" it says on the lock screen. The phone will remember it heard this, so if I get home this evening and check what that was... oh, "Break My Heart by Dua Lipa".

Google builds a model and sends it to phones that opted in to enable this service. It's not large, and I actually don't know how often it's updated: every day? Every week? Every month? No clue. But the actual matching happens on the device, where it's most useful and least privacy-invading.
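Google hasn't published the exact mechanism here, but the general pattern is easy to sketch: a compact fingerprint table is shipped to the device along with the model, and recognition is a purely local lookup. In this toy Python sketch the fingerprint function and the "database" are deliberately naive stand-ins, not the real algorithm.

  # Toy sketch of the on-device matching pattern described above. A small
  # fingerprint table is downloaded periodically; audio is fingerprinted and
  # matched locally, so the raw audio never leaves the phone.
  import hashlib
  import numpy as np

  def fingerprint(samples: np.ndarray) -> str:
      # Real systems hash spectrogram peak patterns; coarse sign-quantization
      # plus a hash is enough to show the flow.
      return hashlib.sha1(np.sign(samples).tobytes()).hexdigest()[:16]

  # Pretend this table arrived with the periodic model/database update.
  known_clip = np.sin(np.linspace(0, 40, 4000))      # stand-in for a known song
  local_db = {fingerprint(known_clip): "Break My Heart by Dua Lipa"}

  def identify(heard: np.ndarray):
      return local_db.get(fingerprint(heard))

  print(identify(known_clip))                         # matches locally
  print(identify(np.cos(np.linspace(0, 40, 4000))))   # unknown -> None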


IIRC, phone keyboards, for example, use federated learning, where the model is further trained locally. You don't want to send every word the user types, for privacy and other reasons. Some kind of "diff" of the local model can then be sent to the cloud periodically to be folded into the base model.

There is still an attack vector: you can infer a bit from the "diff", but you probably can't tell exactly what the user wrote.
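A rough sketch of that round trip, in the spirit of federated averaging: each device trains locally, uploads only a weight delta (the "diff"), and the server folds the averaged deltas into the base model. The tiny linear model, learning rate, and random "typing history" below are placeholders; real deployments add secure aggregation, clipping, and differential-privacy noise on top.

  # Federated-averaging sketch: only weight deltas leave the device, never the
  # raw local data. Everything here is a toy stand-in for illustration.
  import numpy as np

  def local_update(base_weights, local_x, local_y, lr=0.01, epochs=5):
      """Train a tiny linear model on local data; return only the weight delta."""
      w = base_weights.copy()
      for _ in range(epochs):
          grad = local_x.T @ (local_x @ w - local_y) / len(local_y)
          w -= lr * grad
      return w - base_weights        # the "diff" that gets uploaded

  def server_aggregate(base_weights, deltas):
      """Average the per-device deltas and apply them to the shared base model."""
      return base_weights + np.mean(deltas, axis=0)

  # One toy round with two devices; the data stands in for local typing history.
  rng = np.random.default_rng(0)
  base = np.zeros(4)
  deltas = [
      local_update(base, rng.normal(size=(32, 4)), rng.normal(size=32)),
      local_update(base, rng.normal(size=(32, 4)), rng.normal(size=32)),
  ]
  new_base = server_aggregate(base, deltas)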


But can you lead in (2) without being good at (1)?

If you train on limited data, then your inferences will be of poor quality, even if they have low latency.

So in sum, it seems you can't be a leader in ML without both.


You can lead in (2) with good enough (1).

Being a leader in (1) does not mean you'll be good at (2), and vice versa.

There's also a difference between limited data and good enough data.

If you train on good enough data, you can have good enough models.

If people believe the focus of AI/ML should just be precision/recall (or other measures of accuracy) and having tons of data, they're missing many of the other elements that make AI/ML successful in application.


It's not my area, but I read Google's AI blog and they seem to be doing machine learning research that ends up in their phones, such as various camera improvements and offline Google Translate.

It seems like it's the sort of thing you do when you can, but often it comes as a second phase after getting it to work in the data center.

I didn't see anything in this article that was obviously unique to Apple.



I think a lot of people have underrated the benefits Apple reaps in this area from being so far ahead in chip design and controlling their hardware. As regards successfully productizing ML, I've recently become convinced that Apple has been more successful at it than Google so far. Translation was already well within Google's wheelhouse and they had been doing it for years, and their work with cameras is inevitably going to be hindered by the fact that they can't depend on certain specs the way Apple can. I suspect the Pixel exists at least partially to prove that they understand those benefits.


Apple is bad at AI because Apple is bad at software, and increasingly bad at common sense. Being good at hardware won't compensate for that.

Example: Apple Maps regularly thinks I need help finding my way home from locations close to where I live. Some basic practical intelligence would understand that I have visited these places before and there's a very good chance I already know my way home.

It would know that I would appreciate a route if I'm a couple hundred miles from home at a location I've never been to. But a shopping trip to the next town fifteen minutes away? Thanks, but no - that's Clippy-level "help."

IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.

Siri was a magnificent start but it seems Apple not only failed to build on it or take it further but actively scaled it back to save a few dollars in licensing fees.

Google is doing better by looking at specific applications - like machine translation. But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.


I'm not so sure your example is the best illustration of "bad common sense"; I can see a ton of use cases where this is useful for a lot of users. For example, you live in the suburbs next to a highway, and your route home is usually 15 min. You get the notification and see that today it's 30 min because there is an accident. Instead you take the back roads or wait at work a little longer for it to clear. The directions aren't the value; the time estimates are, because you can't know the highway's current traffic conditions off the top of your head. It's a replacement for the 4:30 PM traffic updates on the radio station.

Also it works well with the “share ETA” feature where you can automatically share your ETA with family when you start directions home.

And anecdotally, my Google Home does the same thing at 5 PM every day... but it gives me directions "home" from "work" when it is actually routing me to my old house, which I moved out of 6 months ago. My home address is updated and the old one removed, so


Yeah, I just completely disagree. I agree that Maps' machine learning isn't very good but I've been completely unimpressed with Google's as well.

>IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.

This has been a meme for a few years now, and I don't know what basic quality is supposed to be declining. Their development has arguably accelerated, and it doesn't seem like there are any more bugs than normal. I agree that Siri has not been advanced as much as it should have been, but it seems like they're working on it.

>But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.

I'm not sure what you mean by "general AI" but I think Apple has the best shot at it of any company working right now (unless you mean AGI).


Even there, Google is hindered by the fact that it is so bad at hardware. The Pixel's camera lead is not as big as it used to be, and now Apple is ahead on video. The newest phone from Google uses the same sensor as the Pixel 2, whereas others have moved on to bigger sensors with quad-Bayer filters.


Yep.


The Pixel is Google being aware of its handicap in in-house hardware prototyping and research, and, ironically but predictably for Google, they apply ML to that too.

Apple can come at this same problem from the hardware and software sides all at once, with their own internal dev cycles aligning with the yearly iOS/iPadOS software drops, and iPhone/iPad hardware drops timed within a month or two of each other, year after year. Sure, it's buggy as hell, but it still works better than Android. One would hope so, since you can't do proper sideloading, which Android natively supports.

Apple’s security argument is a childlike excuse for not doing one’s homework, not a reasonable justification for Apple unreasonably and intentionally feature-gating iOS and iPadOS devices. I’m an owner and I’m root. I fully control devices I own because that is a Natural Right of exclusive ownership of hardware devices under American First Sale Doctrine, which has also been recognized by the Library of Congress as Constitutionally-protected usage to preserve inalienable rights to nondiscrimination in computing devices.

So Apple can take a hike. We’re getting what we need, and we’re getting bugs fixed. Jailbreaking has surfaced more 0days than jailbreak devs squirrel away for the next rainy day after iOS updates. We need native full root support, full stop.

These research phones have landmines everywhere in the license agreements to get one, and I’m not re-buying an iPhone I already own to get half-assed fakeroot, especially when I’m already running as root on all the iPhones I ever bought. It’s not hard to avoid updates and save blobs, but should we really have to intentionally run old, known-insecure builds just to have r/w and root? Is this a mainframe? This is a joke.

https://developer.apple.com/programs/security-research-devic...

What is Apple even arguing against? It's not reality, that ought to be clear. They're arguing for increased and continued profits for Apple, at the cost of our rights being trampled and violated, and we're supposed to accept that they had our best interests at heart via increased security? Tell me another joke. Benjamin Franklin and I'll be here all week.

The lack of free sideloading without a $99/year dev ticket is a joke. The scraps we get, with 7 days between re-signing apps, are a joke. Devs are forced to abuse TestFlight to distribute apps that would otherwise be listed on the App Store, if not for developers' fears of App Store rejection and potential TestFlight revokes.

There's gotta be a better way, but jailbreaking is the best we've got for now. To that point, the Jailbreak Bot on Telegram is a public service, as are saurik himself and the entire reddit community r/jailbreak.

https://t.me/rJailbreakBot

https://old.reddit.com/r/jailbreak

All that being said, Apple really is in a league of its own, with the market capitalization to back it up; Apple shares features with other companies while simultaneously being unlike every other company on Earth.

[Current]

https://developers.google.com/ml-kit/vision/object-detection...

[2018]

https://www.fastcompany.com/90247454/the-pixel-3-puts-google...


How is this at all related to discussing Apple’s position in AI/ML?


I agree that iOS should have some escape hatch where power users can sideload, but the problem is that the current situation is really good for privacy. Apple is able to act as a powerful agent for the otherwise powerless user, forcing companies to respect the user's device and privacy. You can see on Android the same apps tracking the user in ways that just wouldn't be allowed on iOS.

If there were a way to sideload apps, then Amazon would probably create a third-party app store to compete with the main one. This would probably result in major apps moving over so they could abuse users in all kinds of ways banned from the App Store.

I'm not really sure what the middle ground is here where iOS continues being the privacy OS while also letting the user do whatever they want.


That's not true. Jailbreaks exist, and always will; that's the current status quo. Opening up sideloading will surface more bugs. Apple already has best-in-class built-in 2FA for Apple ID. To say they can't just leverage the Secure Enclave to do the heavy lifting is to fail to make Apple's argument for them. Security would benefit, because Apple would have more devs seeing and reporting more bugs if sideloading were possible. It's a simple truism. Look at the new Microsoft.


In a sense maybe the iPhone (and Android phone) is a mainframe?

The user shares the system with ... the apps.


It still sounds like excuse-making, because Apple is behind even on its own chosen terms: its competitor launched on-device machine translation a long time ago. The only remaining part of Apple's AI privacy story is insinuation.


Well, Google's predictive keyboard and speech-to-text leave Apple in the dust. As does their voice assistant. So... yes?


I'm not an Apple fankid, but the predictive keyboard and speech-to-text work wonderfully on my iPhone, and I use Siri every day with no problems. Google's is probably better, but the difference is not a meaningful one from what I can tell.


As someone who bought the first Android phone and now has an iPhone... it is extremely noticeable how much better Android is on these fronts. But depending on the person, that gap may not be important. For me, it is night and day. I felt Google had a long way to go on these fronts, so I was stunned at just how bad Apple was :(.


As someone who switched to iOS from android I’ll second this.

Also, Google's keyboard swipe is WAY better than the iOS implementation.


Just switched from Android to iOS, and the major issue is that it tries to correct things that are not real words. Somehow the Android one was better at detecting whether something is a typo of a common word or just something it doesn't know.


That's pretty much what I hear from anybody who has switched to iOS after a while on Android: they start by complaining about iOS's keyboard.


If we're just going to give anecdotes: I'm usually an Apple fankid, but I get super annoyed by autocorrect on iPhone. The Google one, when I've used it, does seem a lot better, and also faster.



