Its behavior is totally bizarre. It’s like an underpowered chess engine that makes flagrant blunders: capitalizing random words in the middle of a sentence (rock -> Rock), the context sensitivity of an actual rock, forcing the same correction multiple times (i.e. you go back to fix its error, and it defiantly repeats it), contraction mixups, blindness to off-by-one-keystroke errors (consentuallt -> <no action>), and of course the occasional random word substitution. Only martial artists want to duck people (and it—no joke—just now substituted [duck -> suck]. You had one job, autocorrect.)
Is this just me? Is it actually 2020? What is going on?
PS: iOS just did this twice while typing this reply.
Edit: I also bought all of my iOS devices in Hong Kong, so there's that.
Most other devices I bought elsewhere, except for those I was dead sure I was going to keep. As far as I can tell, HK is the only place in the world where Apple retail has a no-returns policy.
Yeah, we can thank the x-border gray market for that. In the beginning it was...flexible, in that if you obviously weren’t a smuggler they’d quietly accept the return, but I haven’t needed to test that in a while.
Sync between the iPhone and OS X works completely randomly on the Mac - you often have to wait "a long time", and since Apple also has the completely idiotic concept of "everything has to be magic", you never get an estimate, an error message, or a progress bar. So, like everything on a Mac, you have to just fiddle, disconnect, reconnect, wait, and hope and pray that things will work out.
And then we haven't even touched on their productivity apps like Numbers and Pages, which for a long time had features in the templates that weren't available in new documents, so you had to start from a template. Completely insane.
I still use the Mac and like it for the most part, at least compared to Windows, which can't even get search right without showing random ads and internet results - another pretty crazy thing to push to production for a billion people.
It's just pathetic to congratulate anyone on this level of nonsense.
It resets the entire thing though.
Companies should really just grow the fuck up and stop doing stupid, cutesy things like pretending adults regularly typo a reference to a bird.
I would leave iOS solely for this reason if it were not for the fact that Google is just as bad.
The T9 input system is better.
Yes, but if it's not even possible to get auto-rotate to function correctly most of the time (to the point it becomes funny; countless are the times I've seen people yell at their device because it either doesn't rotate when wanted or rotates when not wanted), how do you expect something way more complicated to function appropriately? Only partially kidding here; this isn't criticism of the fact that autocorrect doesn't work as well as it should - it's just really hard to solve. The point of failure, IMO, is that there are other approaches, not involving AI or ML, which (as long as the former doesn't work better) might actually be better suited and lead to results faster. E.g. something as simple as adding everything ever typed to the dictionary, sorting by number of usages, and eventually dropping items with only one usage ever on the assumption they were typos. Which is roughly how phones used to do it pre-full-keyboard-era, I assume, and at least for me that actually worked pretty well, and definitely with fewer mistakes. Maybe not as fast, though. But I'm personally not interested in trading correctness for speed.
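A minimal sketch of that frequency-based idea in toy Python (all names made up; a real keyboard would obviously need prefix indexing, decay over time, etc.):

    from collections import Counter

    class PersonalDictionary:
        """Everything the user ever typed, ranked by usage count."""
        def __init__(self, min_uses=2):
            self.counts = Counter()
            self.min_uses = min_uses  # words seen only once are assumed to be typos

        def observe(self, word):
            self.counts[word.lower()] += 1

        def suggestions(self, prefix):
            # Rank completions by how often the user actually typed them.
            prefix = prefix.lower()
            return sorted(
                (w for w, n in self.counts.items()
                 if w.startswith(prefix) and n >= self.min_uses),
                key=lambda w: -self.counts[w])

    d = PersonalDictionary()
    for w in "the rock fell on the rock pile".split():
        d.observe(w)
    print(d.suggestions("ro"))  # ['rock'] - single-use words never qualify

Dumb, transparent, and it never "corrects" you to a word you've never typed.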
-swipe down the settings panel
-swipe up the settings panel
-let it reorient
I NEVER want to rotate the home page. NEVER
As for specific errors, does rock capitalize when "the" is before it? (The Rock) The models learn from how users write. But when you think about it, it is actually fairly difficult to predict what the next word is going to be - you can test it with your friends. Have them create a sentence and feed it to you one word at a time. The better you know the person, the better you'll do, but I'm guessing you still won't be that good.
And no, I don’t often discuss The Rock, but if I did (“Do you smell what the r...”) I'd expect it to finish the alley-oop. I do not want to go Rock climbing.
iMessage is software that is used <Carl Sagan voice> billions and billions of times a day, and half the time it feels like verbal jiu-jitsu (“jiu-hursuit” according to autocorrect just now) with Dory the fish.
> if the word is in the dictionary, then don’t change it
Actually, the thing that bugs me the most is when there's a long, complex word that is in the dictionary and it doesn't suggest it, even when I have the right word - it just isn't underlined in red. Or I make an off-by-one error in one of these words and it has no idea what I'm trying to spell.
They have a long way to go. These systems aren't just ML btw.
On a day-to-day basis I use 4 different languages, sometimes even mixing them in a single conversation (i.e. the other person is bi-/trilingual too). For me, autocorrection is only a distant dream and will probably remain so for a long time.
Human language is actually just one vast system; it isn't best understood as a bunch of nicely compartmentalized, independent languages. So in a way it would actually be more wrong for an AI model to insist that "Beaucoup money" isn't a reasonable thing for the user to want to write, just because those two vocabulary words don't co-exist in a single defined language. The user may or may not know that, but they certainly don't care.
It looks like it uses all the dictionaries so suggestions appear from all languages for every word you're typing.
Sure, some people would say “I went to Menlo for lunch today”, but it should always have the option of “Park” (which when I type it, it always knows to capitalize).
Just an aside: Although common perception is that "weapons-grade" refers to the highest quality, in reality it refers to the lowest-possible quality that meets the bare minimum requirements!
You might as well refer to the bronze medal as “just barely enough to be in the awards ceremony”. Is that supposed to be clever somehow?
I thought the parent comment was mildly interesting. There's no need to be so combative.
FWIW, your comment above (about standards) was also mildly interesting to me. I hadn't really given the phrase much thought.
There was no need to be rude about it though.
SwiftKey might be more conservative and feel a bit slower at times, but its autocorrect and prediction do a really good and consistent job.
For example if I type "this is not the" it suggests the next word might be "case".
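I'd guess that kind of suggestion can be as simple as an n-gram table underneath - a toy bigram sketch (real keyboards presumably use longer context and neural models, but the idea is the same):

    from collections import Counter, defaultdict

    # Toy bigram model: predict the next word from the previous one.
    corpus = "this is not the case and this is not the end of it".split()
    next_word = defaultdict(Counter)
    for prev, cur in zip(corpus, corpus[1:]):
        next_word[prev][cur] += 1

    def predict(prev):
        counts = next_word.get(prev)
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))  # 'case' (ties broken by insertion order)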
Plus, I feel like my ability to type things correctly might suffer from crutches like this.
The same goes for voice recognition, when it tries to squeeze a specific name into a set of common words. It's just bizarre.
In the past, when I tried the Google voice assistant or Siri, I always had to put on my best American accent (almost comical), otherwise they would often not understand.
I'm not sure I believe that.
So, what would make you believe it?
I have no idea what is going on, but the answer is installing SwiftKey
FWIW, SwiftKey was a paid app a few years ago. Then it became free. How do they still earn money to keep the company running?
(I always had an Android phone but I assume it's also free on iOS)
I have rather petite hands and fingers, I should note.
I think you found the problem
These kinds of reality-distortion pieces aren't going to help them. They have $100B+ in cash; they could easily start a reputable open research lab that rivals FAIR, OpenAI, or DeepMind. Even smaller companies like Intel and Adobe are starting to realize that this is necessary so they can tap into expertise on demand. At minimum it would be totally worth it for a talent pipeline that can be motivated to do "rotations" or "sabbaticals" into product groups from an open lab.
Of course, the Sculley/Spindler/Amelio Apple seemed to be far more open regarding research than the Jobs- and Cook-era Apple. Jobs closed down the labs in 1997 and helped institute Apple's famous culture of secrecy that persists today.
There's also a lot more to making useful AI/ML-powered technology than having good models.
Defining and limiting the field to just research and models is a common pitfall I've seen.
It was less of an issue with the older MacBook airs, but the Pros now have a giant oversized trackpad that you have to touch when typing unless you hover.
I don't own or use Apple products so this takes more explaining for me.
Most of these problems cannot be solved simply by throwing the best engineers at them. No amount of classical algorithms you learned as a CS major is going to help you implement the best solutions to these problems. The state-of-the-art solutions require intensely focused researchers who have studied these problems for many years, know which 10% of the papers are even worth looking at, and know the pros and cons of the different techniques. Something as benign as running a neural net on phone hardware is an intensely researched subject, and your implementation can be literally 10X to 100X better in speed and power consumption if you have kept up with the field.
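To make that concrete, here's one of those "benign" tricks in miniature - post-training int8 quantization, as a toy numpy sketch (real on-device stacks add per-channel scales, calibration, integer kernels, and so on):

    import numpy as np

    # Store float32 weights as int8 plus a scale factor: ~4x less memory
    # and much cheaper integer math on phone hardware, for a small
    # accuracy cost. One of the simplest tools in that research area.
    w = np.random.randn(256, 256).astype(np.float32)  # toy weight matrix

    scale = np.abs(w).max() / 127.0                   # map [-max, max] -> [-127, 127]
    w_q = np.round(w / scale).astype(np.int8)         # quantized weights (shipped)
    w_dq = w_q.astype(np.float32) * scale             # dequantized at inference time

    print("mean abs error:", np.abs(w - w_dq).mean())  # tiny relative to |w|

Whether you know variants of this (and when each one falls apart) is exactly the kind of thing that separates someone who reads the field from someone who doesn't.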
Some of it might now be done via AI, but is it done any noticeably better than 3-4 years ago? I certainly don't trust my camera to adjust itself, or the sleep analysis to be any good, or the article recommendations to not recommend based on outrage, or the spam filter to work at all.
How is this measured?
What does he mean by this? Google search and Android are both used by more than a billion people. YouTube over 2 billion. These are all bigger than any Apple product. If anything, Google is known for shipping consumer experiences that reach a large number of people. Apple by contrast is known for its high-quality, high-price consumer experiences that reach fewer people.
>they're not known for shipping consumer experiences
IMO, the difference is not in selling 'consumer experiences' but in selling 'consumer aspirations', which, as you rightly pointed out, are not for many.
Google is known for shipping experiences that are used by hundreds of millions of people.
Unfortunately that model means they have to ignore everyone individually.
Android also does on-device inference. A lot of the time, things start out in the cloud, until whatever machine learning model can be shrunk to run on-device at the right performance. So you see, for example, text-to-speech and speech-to-text started out as cloud calls, and now they're on-device. Google Translate ran in the cloud, but now, for some languages, it happens on-device.
Things like Google's "Live Transcription/Caption" on Android wouldn't work if it wasn't on-device.
Apple similarly went to the cloud for Siri speech recognition and TTS until they could run it locally.
For other things which need large models, there is Federated Learning to preserve privacy. Google Keyboard has been using Federated Learning for some time now.
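For anyone unfamiliar, federated averaging is conceptually tiny: raw data stays on each device, and only model updates travel to the server. A stripped-down toy sketch (a real deployment adds client sampling, clipping, secure aggregation, etc.):

    import numpy as np

    def local_update(w, X, y, lr=0.1):
        # One gradient step on this device's private data
        # (toy linear model; X and y never leave the device).
        grad = X.T @ (X @ w - y) / len(y)
        return w - lr * grad

    rng = np.random.default_rng(0)
    global_w = np.zeros(3)
    for _round in range(100):
        client_ws = []
        for _ in range(5):                          # 5 simulated devices
            X = rng.normal(size=(20, 3))
            y = X @ np.array([1.0, -2.0, 0.5])      # shared ground truth
            client_ws.append(local_update(global_w.copy(), X, y))
        global_w = np.mean(client_ws, axis=0)       # server only sees updates

    print(global_w)  # ~[1.0, -2.0, 0.5], learned without pooling any raw data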
It starts by talking about how until recently Apple wasn't doing AI work where it needed to be. Then there's the weird excerpt where he claims Google is “not known for shipping consumer experiences that are used by hundreds of millions of people.” The author then raises the legitimate point that AI benefits from having lots of data to train on, but then quotes an answer by Giannandrea to a different question, which includes him stating that bigger models aren't more accurate than smaller ones. The point that on-device inference is more responsive is valid but not unique to Apple; the article says “Android phones don't do nearly as wide an array of machine learning tasks locally”, but I don't think this is true.
I would agree with that characterization of Google. They have a very small handful of successes and an enormous pile of failures that they’ve discarded. They give every impression of not really knowing what to do.
I think Giannandrea was also referring specifically to the kinds of experiences Apple can provide on the iOS and iPadOS platform because of their high end hardware and deep software integration. Google has yet to replicate that, and even seem to be bored with Android recently: https://daringfireball.net/linked/2020/08/05/wear-os-music
> I would agree with that characterization of Google.
No. It's fine to dislike Google's stuff, but words have meanings. The claim that Google doesn't ship ‘consumer experiences’ to hundreds of millions of people is objectively false.
>No. It's fine to dislike Google's stuff, but words have meanings. The claim that Google doesn't ship ‘consumer experiences’ to hundreds of millions of people is objectively false.
Sure, they have some experience. But their core business is not shipping experiences that consumers pay them for. That is all.
The only thing I can think of is that thing they demoed that would call restaurants to make bookings for you, but that was clearly highly experimental, and it's not like Apple has done that.
I really do want to see them succeed; I guess I'm just overall disappointed with where they're at right now, and I think they've somewhat overhyped it. I'm not a voice-assistant person - I disable Siri personally.
>Yes, I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around. And whether that's location data—like what are you doing— [or] exercise data—what's the accelerometer doing in your phone—it's just better to be close to the source of the data, and so it's also privacy preserving.
A few years ago was when this narrative was at its peak and I believe it was mostly because Google (and to a lesser extent Facebook) were talking about machine learning and AI in basically every public communication. What came of it? Were all the people who claimed Apple's privacy stance would leave them in the dust proven right? For one, being "good at machine learning" is like saying you're good at database technology. It's a building block, not a product. Maybe Google and Facebook are doing cutting edge research in the field, but so was Xerox PARC.
It's fair to say that there are multiple areas for AI leadership.
It is generally believed that:
(1) Those with access to the best data (which is not necessarily the most, but is often believed to be) have a strong starting point for training models; because of this, Google, Facebook, and Microsoft are often credited with this advantage due to the nature of their businesses.
(2) Inference/prediction at the edge, e.g. on-device, is believed to be the best point for applying those models; this can be for a variety of reasons, including latency and other costs associated with sending model input data from edge sensors/devices. Some applications are entirely impractical or likely impossible to achieve without conducting inference on-device. Privacy-preservation is also a property of this approach. Depending on how you want to view this, this property could be a core design principle or a side-effect. Apple's hardware ecosystem approach and marketshare (i.e. iPhones) provide a strong starting point for making the technology ubiquitous for consumer experiences.
There's a pop song playing, I kinda like it. I could pay attention to the lyrics and try to Google them or ask somebody that might know what it is... no need, I just look at my phone, "Break My Heart by Dua Lipa" it says on the lock screen. The phone will remember it heard this, so if I get home this evening and check what was that... oh, "Break My Heart by Dua Lipa".
Google builds a model and sends it to phones that opted in to enable this service. It's not large, and I actually don't know how often it's updated - every day? Every week? Every month? No clue. But the actual matching happens on the device, where it's most useful and least privacy invading.
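I have no idea how it actually works internally, but conceptually it only needs to be something like nearest-neighbor matching against a small shipped database of song fingerprints - a purely hypothetical sketch:

    import numpy as np

    # Hypothetical: the server periodically ships fingerprint embeddings
    # for popular songs; the phone matches locally, so the audio it
    # hears never leaves the device.
    rng = np.random.default_rng(1)
    db = rng.normal(size=(10_000, 64)).astype(np.float32)  # shipped with updates
    db /= np.linalg.norm(db, axis=1, keepdims=True)
    titles = [f"song_{i}" for i in range(len(db))]         # placeholder titles

    def identify(clip_embedding, threshold=0.8):
        v = clip_embedding / np.linalg.norm(clip_embedding)
        sims = db @ v                       # cosine similarity to every entry
        best = int(np.argmax(sims))
        return titles[best] if sims[best] >= threshold else None

    heard = db[4242] + 0.05 * rng.normal(size=64).astype(np.float32)
    print(identify(heard))                  # 'song_4242'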
There is still an attack vector - you can infer a bit from the "diff" - but you probably can't tell exactly what the user wrote.
If you train on limited data, then your inferences will be of poor quality, even if they have low latency.
So in sum, it seems you can't be a leader in ML without both.
Being a leader in (1) does not mean you'll be good at (2), and vice versa.
There's also a difference between limited data and good enough data.
If you train on good enough data, you can have good enough models.
If people believe the focus of AI/ML should just be precision/recall, or other measures of accuracy, and having tons of data, they're missing many other areas and elements for what make AI/ML successful in application.
It seems like it's the sort of thing you do when you can, but often it comes as a second phase after getting it to work in the data center.
I didn't see anything in this article that was obviously unique to Apple.
Example: Apple Maps regularly thinks I need help finding my way home from locations close to where I live. Some basic practical intelligence would understand that I have visited these places before and there's a very good chance I already know my way home.
It would know that I would appreciate a route if I'm a couple hundred miles from home at a location I've never been to. But a shopping trip to the next town fifteen minutes away? Thanks, but no - that's Clippy-level "help."
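The "basic practical intelligence" here could be a two-rule heuristic; no ML required. A toy sketch with made-up names and thresholds:

    def should_offer_route_home(place, visit_counts, miles_from_home):
        # Hypothetical rule: only volunteer directions when the user
        # plausibly doesn't already know the way.
        if visit_counts.get(place, 0) >= 3:   # familiar place: stay quiet
            return False
        if miles_from_home < 30:              # close to home: they know the area
            return False
        return True

    visits = {"shop_next_town": 12, "roadside_diner": 1}
    print(should_offer_route_home("shop_next_town", visits, 15))   # False
    print(should_offer_route_home("roadside_diner", visits, 230))  # True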
IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.
Siri was a magnificent start but it seems Apple not only failed to build on it or take it further but actively scaled it back to save a few dollars in licensing fees.
Google is doing better by looking at specific applications - like machine translation. But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.
Also it works well with the “share ETA” feature where you can automatically share your ETA with family when you start directions home.
And anecdotally, my Google Home does the same thing at 5pm every day... but it gives me directions "home" from "work" when it is actually routing me to my old house, which I moved out of 6 months ago. My home address is updated and the old one removed, so...
>IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.
This has been a meme for a few years now, and I don't know what the basic quality is that's supposedly declining. Their development has arguably accelerated, and it doesn't seem like there are any more bugs than normal. I agree that Siri has not been advanced as much as it should have been, but it seems like they're working on it.
>But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.
I'm not sure what you mean by "general AI" but I think Apple has the best shot at it of any company working right now (unless you mean AGI).
Apple can come at this same problem from the hardware and software sides all at once, with their own internal dev cycles aligning with the yearly iOS/iPadOS software drops, and iPhone/iPad hardware drops timed to a month or two of each other, year after year. Sure, it’s buggy as hell, but it still works better than Android. One would hope so, since you can’t do proper sideloading, as Android natively supports.
Apple’s security argument is a childlike excuse for not doing one’s homework, not a reasonable justification for Apple unreasonably and intentionally feature-gating iOS and iPadOS devices. I’m an owner and I’m root. I fully control devices I own because that is a Natural Right of exclusive ownership of hardware devices under American First Sale Doctrine, which has also been recognized by the Library of Congress as Constitutionally-protected usage to preserve inalienable rights to nondiscrimination in computing devices.
So Apple can take a hike. We’re getting what we need, and we’re getting bugs fixed. Jailbreaking has surfaced more 0days than jailbreak devs squirrel away for the next rainy day after iOS updates. We need native full root support, full stop.
These research phones have landmines everywhere in the license agreements to get one, and I’m not re-buying an iPhone I already own to get half-assed fakeroot, especially when I’m already running as root on all the iPhones I ever bought. It’s not hard to avoid updates and save blobs, but should we really have to intentionally run old, known-insecure builds just to have r/w and root? Is this a mainframe? This is a joke.
What is Apple even arguing against? It’s not reality, that ought to be clear. They’re arguing for increased and continued profits for Apple, at the cost of our rights being trampled and violated, and we’re supposed to accept that they had our best interests at heart via increased security? Tell me another joke. Benjamin Franklin and me’ll be here all life.
The lack of free sideloading without a $99/year dev ticket is a joke. The scraps we get with 7 days between resigning apps is a joke. Devs are forced to abuse TestFlight to distribute apps which would otherwise be listed on App Store, if not for developers’ fears of App Store rejection, and potentially TestFlight revokes.
There’s gotta be a better way, but jailbreaking is the best we’ve got, for now. To that point, the Jailbreak Bot on Telegram is a public service, as is saurik himself and the entire reddit community r/jailbreak.
All that being said, Apple really is in a league of its own, with the market capitalization to back it up; Apple has features found in other companies while simultaneously being unlike every other company on Earth.
If there were a way to sideload apps, then Amazon would probably create a third-party app store to compete with the main one. This would probably result in major apps moving over so they could abuse users in all kinds of ways banned from the App Store.
I'm not really sure what the middle ground is here where iOS continues being the privacy OS while also letting the user do whatever they want.
The user shares the system with ... the apps.
Also, Google's keyboard swipe is WAY better than the iOS implementation.
The frequency with which Siri shits its pants (can't help, asks me to excuse it being slow as it tries to set a timer, mishears me, etc.) is honestly remarkable.
(Not to mention the fact that my phone continues to alert me to text messages read on my Mac or iPad whole minutes ago.)
Apple is still working to overcome deep problems in both its cloud infrastructure and AI/ML. If they cannot be honest about this, they should not be dishonestly trying to present a picture of all being well.
> Did you read the article?
I really hope you have read the HN guidelines 
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
The problem with machine learning altering the behavior of a device is that it short-circuits human learning. The human brain is very good at learning deep insights about things and its environment, and it alters its behavior accordingly.
If things change while we're learning about them, it confuses and upsets us. A dumb machine is much easier to use than a "smart" one.
Apple's mantra seems to be to let everyone use the device as per their whims and fancies, and let the device figure out how to deal with it.
Ultimately what this leads to is user-experience lock-in, where other devices that don't adapt feel clumsy or stupid.
Yes, Apple’s strategy is more privacy-protecting.
But holy hell, yes, larger data sets are going to give more accurate models, and the resulting model doesn’t have to run in the cloud; it can run locally.
Do the AI training in a data center with large data sets, then ship the model to local devices to execute.
That is ALWAYS going to be better and more accurate than what Apple is doing.
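That pattern in miniature (toy Python as a stand-in for the real train-in-the-data-center, run-on-the-device pipelines):

    import pickle
    import numpy as np

    # --- data center: train on the big pooled data set ---
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 8))
    y = X @ rng.normal(size=8)                   # toy ground truth
    w, *_ = np.linalg.lstsq(X, y, rcond=None)    # toy "training"
    with open("model.bin", "wb") as f:
        pickle.dump(w, f)                        # the only artifact shipped to devices

    # --- device: no raw training data, no cloud round trip ---
    with open("model.bin", "rb") as f:
        w_local = pickle.load(f)
    print(X[0] @ w_local)                        # instant, offline inference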
It seems absurd to let Google Photos slurp up your data server side when your iPhone can do 80+% of the photo categorization automatically. It’s equally absurd that Apple has a glacial pace of change for the user side.
Photos... just sits there. To get a new photo to my desktop it's faster to open Google Photos on my desktop and download the photo from there than wait for both Photos.apps to finish syncing.
The uniformity of UX is worth the glacial rate, IMO. The more ScrollViews with arbitrarily laid out content (Photos seems to have these increasingly), the less intuitive I find apps. In Photos, I find it hard to get a sense for what menu-depth I'm at. Apple has a very logical master-detail layout across most apps, and I hope with Mac Catalyst that Apple continues to enforce consistent UI practices.
The more freedom developers are given to put any old layout, the more apps start feeling the way Windows Forms, WPF, and UWP feels on Windows; it's like you have to start thinking about "Okay, when was this app designed?" to figure out how to use it.
I think most people are probably fine with the tradeoff between convenience and features that centralization offers, so unfortunately people who aren't are stuck with more niche products (or Apple, I suppose, although I don't think iCloud is end-to-end encrypted?). I say they're stuck with niche products because I don't think it makes sense to try to offer a slider between convenience and privacy in one product, since the interfaces and feature sets would be so different.
JG, I don’t think that’s your “biggest problem” - Siri is. Your privacy centric on-device strategy limits your view of user feedback, Google gets a lot of shit wrong but they know how to transmit user data and understand their feedback.
If you have to explain to customers that it's ML-based, that's the same as asking the customers to accept its unreliability. And unreliable features are worse than no features, and that's why nobody uses Siri, Alexa, or Google Assistant except for a few reliably working requests.
I have 3 languages on my keyboard and sometimes it suggests an autocomplete in the wrong language.
It's a hard language, and usually not as well covered by ML as English or German.
When you have to say it yourself you probably aren't.
What is there in AI that Google doesn't beat Apple at?
You have to get that data somewhere.
> What is there in AI that Google doesn't beat Apple at?
Excellent question. Unless we get Apple's equivalent of OpenAI, DeepMind, or FAIR, the answer is always "nothing".
Everything uses AI. A program with an if statement is AI.
I think your own point explains why people are more than happy to use a more expensive iPhone despite technological differences.
I'm currently developing with a relatively new Android phone, but I'd never use it as my actual day-to-day phone in lieu of my circa-2014 iPhone (believe me, I've tried).
The extent to which I prefer the overall iPhone experience far outweighs any technological upper hand the Android has. In the scheme of things, phone technology has barely shifted in 6 years; a supposed 2-year difference is utterly negligible.
Side note: what's the deal with the Google Play Store?! I tried to download Facebook Messenger a few days ago for testing, and accidentally downloaded an entirely different app with a very similar name and icon, as it somehow managed to occupy the top listing (an ad?).
Seems obscene that it's so easy to mount a phishing attack against android users.