>Yes, I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around. And whether that's location data—like what are you doing— [or] exercise data—what's the accelerometer doing in your phone—it's just better to be close to the source of the data, and so it's also privacy preserving.
This narrative peaked a few years ago, and I believe it was mostly because Google (and to a lesser extent Facebook) were talking about machine learning and AI in basically every public communication. What came of it? Were all the people who claimed Apple's privacy stance would leave it in the dust proven right? For one, being "good at machine learning" is like saying you're good at database technology. It's a building block, not a product. Maybe Google and Facebook are doing cutting-edge research in the field, but so was Xerox PARC.
It's fair to say that there are multiple areas for AI leadership.
It is generally believed that:
(1) Those with access to the best data (which is not necessarily the most, though the two are often conflated) have a strong starting point for training models; Google, Facebook, and Microsoft are often credited with this advantage because of the nature of their businesses.
(2) Inference/prediction at the edge, e.g. on-device, is believed to be the best point for applying those models, for a variety of reasons, including latency and the other costs of shipping input data from edge sensors/devices. Some applications are impractical, or likely impossible, without on-device inference. Privacy preservation is also a property of this approach; depending on how you want to view it, that property is either a core design principle or a side effect. Apple's hardware ecosystem approach and market share (i.e. iPhones) provide a strong starting point for making the technology ubiquitous in consumer experiences.
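The cost argument in (2) is easy to make concrete with a back-of-envelope calculation. All the numbers below are invented for illustration (the sample rate, sample size, and model size are plausible but not sourced): streaming raw accelerometer data to a server racks up bandwidth every single day, while an on-device model is downloaded once per update.

```python
# Back-of-envelope sketch (all figures are illustrative assumptions):
# cloud inference means continuously uploading raw sensor data,
# edge inference means downloading a model occasionally and keeping
# the data on the device.

SAMPLE_RATE_HZ = 100           # hypothetical phone accelerometer rate
BYTES_PER_SAMPLE = 12          # 3 axes as float32
SECONDS_PER_DAY = 86_400

# Daily upload if every sample is shipped to a data center.
upload_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

# One-off download for a hypothetical 10 MiB on-device model.
model_size = 10 * 1024 * 1024

print(f"raw stream: {upload_per_day / 1e6:.0f} MB uploaded per day")
print(f"edge model: {model_size / 1e6:.0f} MB downloaded once per update")
```

Even with these toy numbers, a single sensor streamed raw costs roughly ten model downloads' worth of traffic every day, before you count battery, latency, or the privacy exposure of the data leaving the device.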
There's a pop song playing, I kinda like it. I could pay attention to the lyrics and try to Google them or ask somebody that might know what it is... no need, I just look at my phone, "Break My Heart by Dua Lipa" it says on the lock screen. The phone will remember it heard this, so if I get home this evening and check what that was... oh, "Break My Heart by Dua Lipa".
Google builds a model and sends it to phones that opted in to enable this service. It's not large, and I actually don't know how often it's updated - every day? Every week? Every month? No clue. But the actual matching happens on the device, where it's most useful and least privacy-invasive.
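The shape of this is roughly "ship a compact fingerprint database down, match locally". Here's a toy sketch of that idea, assuming the server has already reduced each song to a set of audio fingerprints (e.g. hashes of spectral peak pairs); the hash values, song set, and overlap threshold are all invented, and this is not Google's actual pipeline.

```python
# Toy on-device song matcher: the "database" arrives from the server,
# but the microphone audio never leaves the phone - only set
# intersections happen locally. All data here is made up.

from collections import Counter

# Hypothetical per-song fingerprint sets, precomputed server-side.
ON_DEVICE_DB = {
    "Break My Heart - Dua Lipa": {0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3},
    "Some Other Song":           {0x1111, 0x2222, 0x3C4D, 0x4444, 0x5555},
}

def identify(query_fingerprints, min_overlap=3):
    """Return the best-matching song, or None if nothing overlaps enough."""
    scores = Counter()
    for song, prints in ON_DEVICE_DB.items():
        scores[song] = len(prints & query_fingerprints)
    song, score = scores.most_common(1)[0]
    return song if score >= min_overlap else None

# A noisy capture: most hashes line up with one song, one is garbage.
heard = {0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0xDEAD}
print(identify(heard))  # -> Break My Heart - Dua Lipa
```

The point of the structure, not the specifics: because matching is a lookup against local data, the audio itself is never transmitted, which is exactly the privacy property the comment above describes.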
There is still an attack vector: you can infer a bit from the "diff", but you probably can't tell exactly what the user wrote.
If you train on limited data, then your inferences will be of poor quality, even if they have low latency.
So in sum, it seems you can't be a leader in ML without both.
Being a leader in (1) does not mean you'll be good at (2), and vice versa.
There's also a difference between limited data and good enough data.
If you train on good enough data, you can have good enough models.
If people believe the focus of AI/ML should just be precision/recall, or other measures of accuracy, plus having tons of data, they're missing many of the other elements that make AI/ML successful in application.
It seems like it's the sort of thing you do when you can, but often it comes as a second phase after getting it to work in the data center.
I didn't see anything in this article that was obviously unique to Apple.
Example: Apple Maps regularly thinks I need help finding my way home from locations close to where I live. Some basic practical intelligence would understand that I have visited these places before and there's a very good chance I already know my way home.
It would know that I would appreciate a route if I'm a couple hundred miles from home at a location I've never been to. But a shopping trip to the next town fifteen minutes away? Thanks, but no - that's Clippy-level "help."
IMO the company is stuck in the past, its software pipeline is so badly broken that the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.
Siri was a magnificent start but it seems Apple not only failed to build on it or take it further but actively scaled it back to save a few dollars in licensing fees.
Google is doing better by looking at specific applications - like machine translation. But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.
Also, it works well with the “share ETA” feature, where you can automatically share your ETA with family when you start directions home.
And anecdotally, my google home does the same thing at 5pm every day...but it gives me directions “home” from “work” when it is actually routing me to my old house which I moved from 6 months ago. My home address is updated and the old one removed, so
>IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.
This has been a meme for a few years now, and I don't see what basic quality is actually declining. Their development has arguably accelerated, and it doesn't seem like there are any more bugs than normal. I agree that Siri has not been advanced as much as it should have been, but it seems like they're working on it.
>But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.
I'm not sure what you mean by "general AI" but I think Apple has the best shot at it of any company working right now (unless you mean AGI).
Apple can come at this same problem from the hardware and software sides all at once, with their own internal dev cycles aligning with the yearly iOS/iPadOS software drops, and iPhone/iPad hardware drops timed to a month or two of each other, year after year. Sure, it’s buggy as hell, but it still works better than Android. One would hope so, since you can’t do proper sideloading, as Android natively supports.
Apple’s security argument is a childlike excuse for not doing one’s homework, not a reasonable justification for Apple unreasonably and intentionally feature-gating iOS and iPadOS devices. I’m an owner and I’m root. I fully control devices I own because that is a Natural Right of exclusive ownership of hardware devices under American First Sale Doctrine, which has also been recognized by the Library of Congress as Constitutionally-protected usage to preserve inalienable rights to nondiscrimination in computing devices.
So Apple can take a hike. We’re getting what we need, and we’re getting bugs fixed. Jailbreaking has surfaced more 0days than jailbreak devs squirrel away for the next rainy day after iOS updates. We need native full root support, full stop.
These research phones have landmines everywhere in the license agreements to get one, and I’m not re-buying an iPhone I already own to get half-assed fakeroot, especially when I’m already running as root on all the iPhones I ever bought. It’s not hard to avoid updates and save blobs, but should we really have to intentionally run old, known-insecure builds just to have r/w and root? Is this a mainframe? This is a joke.
What is Apple even arguing against? It’s not reality, that ought to be clear. They’re arguing for increased and continued profits for Apple, at the cost of our rights being trampled and violated, and we’re supposed to accept that they had our best interests at heart via increased security? Tell me another joke. Benjamin Franklin and me’ll be here all life.
The lack of free sideloading without a $99/year dev ticket is a joke. The scraps we get with 7 days between resigning apps is a joke. Devs are forced to abuse TestFlight to distribute apps which would otherwise be listed on App Store, if not for developers’ fears of App Store rejection, and potentially TestFlight revokes.
There’s gotta be a better way, but jailbreaking is the best we’ve got, for now. To that point, the Jailbreak Bot on Telegram is a public service, as is saurik himself and the entire reddit community r/jailbreak.
All that being said, Apple really is in a league of its own, with the market capitalization to back it up; Apple has features found in other companies while simultaneously being unlike every other company on Earth.
If there were a way to sideload apps, then Amazon would probably create a 3rd-party app store to compete with the main one. This would probably result in major apps moving over so they can abuse users in all the ways banned from the App Store.
I'm not really sure what the middle ground is here where iOS continues being the privacy OS while also letting the user do whatever they want.
The user shares the system with ... the apps.
Also, Google's keyboard swipe is WAY better than the iOS implementation.