Hacker News
Why Apple believes it’s an AI leader–and why it says critics have it all wrong (arstechnica.com)
175 points by arunbahl on Aug 8, 2020 | 208 comments

Setting aside all the other work Apple is doing in this area, can someone explain why iMessage autocorrect is unacceptably bad (to put it diplomatically) in the era of weapons-grade language models?

Its behavior is totally bizarre. It’s like an underpowered chess engine that makes flagrant blunders: capitalizing random words in the middle of a sentence (rock -> Rock), the context sensitivity of an actual rock, forcing the same correction multiple times (i.e. you go back to fix its error, and it defiantly repeats it), contraction mixups, blindness to off-by-one-keystroke errors (consentuallt -> <no action>), and of course the occasional random word substitution. Only martial artists want to duck people (and it—no joke—just now substituted [duck -> suck]. You had one job autocorrect.)

Is this just me? Is it actually 2020? What is going on?

The one that drives me mad the most is when I type a word which it incorrectly autocorrects to a proper noun (which gets capitalised). I then use backspace to delete that word and type the (completely different) correct one, but it insists on upper-casing the first letter to match the original casing of the incorrect autocorrect. It really should keep track of the fact that it introduced the case change and invalidate it when the automated action itself is invalidated. This happens frequently to me.

I can never make iOS write “Hong Kong” when swipe-typing; I tried setting up a forced text-replacement entry (kong -> Kong) to zero effect. It is always “Hong kong”.

Similarly I can never get iOS to type “Kings Cross”, it always forces “Kong’s Cross” even when I delete the word and retype.

PS: iOS just did this twice while typing this reply.

iOS for me always autocorrects king -> kong, and I don't understand why. What even is "kong," and who says it more often than "king"??

I think you also have to signal to iMessage that it got it right by sending the message. I tried many times to teach it how to write Hong Kong, and when it got it right, I sent the message. From that point onwards it has always written Hong Kong correctly.

iOS always capitalizes Hong Kong properly for me, but I write that so much that I guess I could have trained it.

Edit: I also bought all of my iOS devices in Hong Kong so, there's that.

I corrected it hundreds of times so it had plenty to learn from. If you don’t have this bug when swipe-typing then I guess it’s much less likely to be fixed for me in an update, oh well.

Bought my iPhone in HK as well, double physical SIM slot is a must have!

Most other devices I bought elsewhere, except for those I was dead sure I was going to keep. As far as I can tell, HK is the only place in the world where Apple retail has a no-returns policy.

> As far as I can tell HK is the only place in the world where Apple retail has a no returns policy.

Yeah, we can thank the x-border gray market for that. In the beginning it was...flexible, in that if you obviously weren’t a smuggler they’d quietly accept the return, but I haven’t needed to test that in a while.

Weird - just tried it and it capitalised it correctly. I'm on the iOS 14 public beta, though.

That’s good news! I’m on iOS 13, wasn’t able to find a way to install the beta yet (not a registered iOS developer myself).

It seems to conflate people's names with words in a sentence, e.g. "I'm crossing the Bridges." Could just be confirmation bias, but whenever I notice that problem it tends to be a name in my address book.

Reminds me of how it capitalises "El Niño" as if I was talking about that climate thing, when I'm just trying to say "the kid" in Spanish.

Interesting. I was going to suggest that was from using the English rather than the Spanish keyboard but I just saw my iPhone do that on the latter as well.

I know exactly what you mean! This is even more annoying in German, where every noun is capitalised.

Why would it be more annoying? Seems like it would be harder to notice this error in German.

It's more annoying because when you type a non-noun there is a higher chance that it gets mistakenly autocorrected to something capitalised (as every noun in German is capitalised versus only proper nouns in most other languages). I hate this flaw of iOS as well.

Got it

Many teams at Apple are pure junk. When Steve Jobs basically fired the iCloud folks in public, a lot of people thought he went too far, but little did they know how bad that team really was and still is. These folks are now at the center of the Apple universe but can't build a simple app that can efficiently sync photos on Windows. Other examples are iTunes (one of the worst widely deployed Windows apps ever), Apple Music (just look at the obscene UX - in 2020), the App Store (good luck deleting an installed app if you have too many of them - in 2020) and of course auto-correct (fails to fix spellings in 2020 that MS Word 2000 used to fix!). Surprisingly these folks have kept up their mediocrity regardless of the otherwise stringent culture at Apple.

As incredible as it is, the problems extend way beyond just syncing on Windows.

Sync between the iPhone and OS X works completely randomly: you often have to wait "a long time", and since Apple also has the completely idiotic concept of "everything has to be magic", you never get an estimate, an error message, or a progress bar. So, like everything on a Mac, you have to just fiddle, disconnect, reconnect, wait, and hope and pray that things will work out.

And then we haven't even touched on their productivity apps like Numbers and Pages, which for a long time had features in the templates that weren't available in new documents, so you had to start from a template. Completely insane.

I still use a Mac and like it for the most part, at least compared to Windows, which can't even get search right without showing random ads and internet results; another pretty crazy thing to push to production for a billion people.

Drives me crazy how unreliable all file sharing is on Apple devices. Like you said you can’t rely on iCloud sync, but half the time even AirDrop doesn’t work. Two devices sitting next to each other and AirDrop shows up on one but not the other. I put them both in Airplane mode and then back on, and now neither of them shows up. Oh wait now it works on one again, but again not the one I need to send from.

I actually meant AirDrop in my comment above (although it does apply to AirPlay too); really, it applies to everything Apple does.

AirPlay is in the same category: it either works and is great, or it's just dead in the water and tough luck.

I once mentioned the rapper xxxtentacion in a text message. Now whenever I sign of an email or text to my wife with an ‘x’ iOS decides that what I actually meant to do was write ‘xxxtentacion’ and I’ve normally pressed send before I notice. After a couple of years now this is just how I talk to my wife.

For me, iOS consistently suggests phrases I've used with my wife in the context of my colleagues, despite me really never having addressed them "Hi honey" or "Hi darling" ... etc.

It's just pathetic nonsense to congratulate anyone on this level of nonsense.

One of my exes has a name that is very close to the word “the.” Her name has embarrassingly showed up, capitalized and everything, in way too many conversations. Apple has permanently turned me into that guy that can’t stop talking about his ex.

I’m going to blame typing ‘sign of’ instead of ‘sign off’ on autocorrect.

You can go to the reset menu in settings, and reset your keyboard dictionary.

It resets the entire thing though.

I recently discovered a neat trick for fixing the duck problem, that is, adding swear words to the iOS dictionary: add a contact with the chosen words in the name fields. E.g. I have a contact named "shit shitty fuck fuckin fucking oh shit"

I find it both amusing and ridiculous that you have to go to such great lengths to be able to type a good swear word on the iPhone.

This sort of 'nudge' is stupidly passive aggressive and frankly condescending and infantilizing.

Companies should really just grow the fuck up and stop doing stupid, cutesy things like pretending adults regularly just typo a reference to a bird.

At least Gboard lets you turn on an option to allow "potentially offensive words" or something like that. I'm surprised it doesn't seem to exist on Apple devices.

The reason iOS type correction is so piss-poor is, to me at least, that it is not a 'typing' correction mechanism; it is a pronunciation correction system. It bases its auto-replace on what the letters appear to be saying or sound like, and NOT, as it should, simply and effectively on typos on a QWERTY keyboard. That is why. And it is like this because Apple wants to steal your voice and would prefer you to talk, not type. Add to this no 'rude' words, even though you're an adult. Duck Apple for this alone!
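For what it's worth, the typo-first approach described above is easy to sketch. This is a purely hypothetical illustration, not Apple's actual algorithm (a real implementation would also handle insertions/deletions and weight by word frequency): only replace a word when the letters that differ sit next to each other on a QWERTY keyboard.

```python
# Hypothetical sketch of keyboard-adjacency-based typo correction.
# Not Apple's actual algorithm; illustration only.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbors(ch):
    """Keys physically adjacent to `ch` on a QWERTY layout."""
    for r, row in enumerate(QWERTY_ROWS):
        if ch in row:
            c = row.index(ch)
            adj = set()
            for dr in (-1, 0, 1):
                rr = r + dr
                if 0 <= rr < len(QWERTY_ROWS):
                    for dc in (-1, 0, 1):
                        cc = c + dc
                        if 0 <= cc < len(QWERTY_ROWS[rr]) and (dr, dc) != (0, 0):
                            adj.add(QWERTY_ROWS[rr][cc])
            return adj
    return set()

def plausible_typo(typed, word):
    """True if `typed` differs from `word` only by 1-2 adjacent-key slips."""
    if len(typed) != len(word):
        return False
    diffs = [(t, w) for t, w in zip(typed, word) if t != w]
    return 0 < len(diffs) <= 2 and all(w in neighbors(t) for t, w in diffs)

def correct(typed, dictionary):
    if typed in dictionary:
        return typed  # never "fix" a real word
    for word in dictionary:
        if plausible_typo(typed, word):
            return word
    return typed
```

So "ducl" becomes "duck" because l and k are adjacent keys, while anything already in the dictionary is left alone.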

I would leave iOS solely for this reason if it were not for the fact that Google are as bad.

The T9 input system is better.

The weird part is that, in my experience, their "swift" input works miles better than regular tapping. It's truly odd. When tapping I have to correct the phone all the time, while using swift (with the same keyboard!) I quite rarely do.

Is 'swift' 'swipe to type'? Yes, I find swipe-to-type surprisingly more efficient than the 21st century's first attempt at tapping on glass.

Yes, it's the swiping keyboard; sorry, I didn't quite remember the term.

I got it wrong too. It's Slide to Type. But we both knew what each other meant, that's the main thing.

Is it actually 2020?

Yes, but if it's not even possible to get auto-rotate to function correctly most of the time (to the point it becomes funny; countless are the times I've seen people yell at their device because it either doesn't rotate when wanted or rotates when not wanted), how do you expect something way more complicated to function appropriately? Only partially kidding here; this isn't criticism of the fact that autocorrect doesn't work as well as it should, it's just that it's really hard to solve. The point of failure, imo, is that there are other ways, not involving AI or ML, which might (as long as the former doesn't work better) actually be better suited and lead to results faster. E.g. something as simple as adding anything ever typed to the dictionary and sorting by number of usages, eventually getting rid of items with only one usage ever, assuming they were typos. Which is roughly how phones used to do it pre-full-keyboard-era, I assume, and at least for me that actually worked pretty well and definitely with fewer mistakes. Maybe not as fast, though. But I'm personally not interested in trading correctness for speed.
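The frequency-count scheme sketched in that last paragraph is simple enough to write down. A toy version, illustrative only (pre-smartphone dictionaries were of course more involved than this):

```python
from collections import Counter

class PersonalDictionary:
    """Toy sketch of the frequency-based approach described above:
    every typed word is remembered, suggestions are ranked by usage
    count, and one-off words (likely typos) are periodically pruned."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, word):
        self.counts[word.lower()] += 1

    def suggest(self, prefix, n=3):
        # Rank known words starting with `prefix` by how often the user typed them.
        matches = [(w, c) for w, c in self.counts.items() if w.startswith(prefix)]
        matches.sort(key=lambda wc: (-wc[1], wc[0]))
        return [w for w, _ in matches[:n]]

    def prune(self):
        # Drop words seen only once, assuming they were typos.
        self.counts = Counter({w: c for w, c in self.counts.items() if c > 1})
```

Correctness over cleverness: the only "model" is the user's own typing history.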

No kidding: about one out of every 10 times I open my iPad Pro connected to the Magic Keyboard, it turns the screen on in portrait mode. It really should be smart enough to know you can't use a Magic Keyboard in portrait mode. So now I have to tip the entire laptop on its side to get it to switch back, and I look like a dumbass because Apple can't fix their rotation.

To change rotation on iPad and keep it locked you have to do 7 steps:

-swipe down the settings panel

-unlock rotation

-reorient device

-swipe up the settings panel

-let it reorient

-swipe down the settings panel

-lock rotation

The only thing I ever want to rotate is videos.

I NEVER want to rotate the home page. NEVER

This also happens to me on Android. Working in ML, this is something I point to when people hype it up. We've come a long way, but we've still got a long way to go.

As for specific errors, does rock capitalize when "the" is before it? (The Rock) The models learn from how users write. But when you think about it, it is actually fairly difficult to predict what words are going to be; you can test it with your friends. Have them create a sentence and feed you one word at a time. The better you know the person, the better you'll do, but I'm guessing you still won't be that good.
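The one-word-at-a-time game is essentially what a bigram language model does: for each word, remember which words tend to follow it, then guess the most frequent follower. A toy sketch, illustrative only and nothing like a production model:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1
    return followers

def predict_next(followers, prev):
    """Guess the most common follower, like a player in the word game."""
    if prev.lower() not in followers:
        return None
    return followers[prev.lower()].most_common(1)[0][0]
```

Trained on a few sentences about The Rock, it would indeed start predicting "rock" after "the"; the better the training data matches the speaker, the better the guesses, exactly as with the human version of the game.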

The most disruptive errors are those that could seemingly be cleaned up with some simple rules (e.g. if the word is in the dictionary, then don’t change it). This doesn’t seem like a “state of ML” issue—the Gmail text completion is annoyingly good, and you-know-who can compose soliloquies and smut with uncanny skill.
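That guard rule really is a one-liner. A hypothetical sketch (the `suggest` callback stands in for whatever correction engine sits underneath; both names are made up for illustration):

```python
def maybe_autocorrect(word, dictionary, suggest):
    """Sketch of the guard rule suggested above: a hypothetical
    autocorrector that never replaces a word already in the dictionary."""
    if word.lower() in dictionary:
        return word          # "rock" stays "rock", never "Rock"
    return suggest(word)     # only unknown words get corrected
```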

And no, I don’t often discuss The Rock, but if I did (“Do you smell what the r...”) I'd expect it to finish the alley-oop. I do not want to go Rock climbing.

iMessage is software that is used <Carl Sagan voice> billions and billions of times a day, and half the time it feels like verbal jiu-jitsu (“jiu-hursuit” according to autocorrect just now) with Dory the fish.

It isn't just you that affects the learning.

> if the word is in the dictionary, then don’t change it

Actually, the thing that bugs me the most is when there's a long, complex word that is in the dictionary and it doesn't suggest it, even when I have most of the right word typed; it just isn't underlined in red. Or I make an off-by-one error in these words and it has no idea what I'm trying to spell.

They have a long way to go. These systems aren't just ML btw.

Predicting subsequent words is hard for humans, but correcting typos is easy. I've turned off autocorrect completely because my typos are always immediately decodable by their human audience, more so than Apple's seemingly random modifications.

Could autocorrect not wait until it sees more of the sentence?

Android autocorrect does this and it's maddening, because it's very bad at it, and now you don't notice the word it changed because you're looking at the subsequent one.

Having to monitor several words back to fix errors that the system adds is exhausting. I wish I could turn it all off, or at least revert to the system from the early days of iOS.

The swipe keyboard is atrocious too, I simply cannot swipe the word ‘you’ with any reliability. It becomes tut, or töre. Töre. Instead of the second most common pronoun in the entire English language.

Somehow swiping keyboards have gotten worse since the original launch of Swype a decade ago. It got terrible after Swype was bought by Nuance, and none of the alternatives match up to what it once was.

I'm bilingual and write approximately the same amount of messages in both languages using iOS. Autocorrect always suggests words from BOTH languages mid sentence. Like what the hell, they should be able to infer which language I'm using just from the first word.
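A crude version of that inference: score each enabled language by dictionary hits across the words typed so far, and commit to the winner for the whole sentence. Purely illustrative; real keyboards presumably do something more sophisticated:

```python
def detect_language(words, dictionaries):
    """Score each language by how many of the typed words appear in its
    dictionary, and commit to the best-scoring one for the sentence."""
    scores = {lang: sum(w.lower() in vocab for w in words)
              for lang, vocab in dictionaries.items()}
    return max(scores, key=scores.get)
```

Even a single unambiguous first word is enough to tip the score, which is roughly what the commenter is asking for.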

I feel your pain.

On a day-to-day basis I use 4 different languages, sometimes even mixing them in a single conversation (i.e. the other person is bi-/trilingual too). For me, good autocorrection is only a distant dream and will probably remain so for a long time.

That sort of makes sense. I am not sure of its prevalence, but at least for Indians it's pretty common to code-switch while writing (even more so while speaking).

Yes, I only speak one language fluently but I'll still arbitrarily code switch sometimes.

Human language is actually just one vast system; it isn't best understood as a bunch of nicely compartmentalized independent languages. So in a way it would actually be more wrong for an AI model to insist that "Beaucoup money" isn't a reasonable thing for the user to want to write, just because those two vocabulary words don't co-exist in a single defined language. The user may or may not know that, but they certainly don't care.

This also happens on Android if you have several input languages enabled.

It looks like it uses all the dictionaries so suggestions appear from all languages for every word you're typing.

The Swype app used to be so much better at this, but Google had to go and buy it, retire the product and absorb none of its algorithms into Gboard. Such a loss.

I write three different languages on a daily basis, and I've always had autocorrection switched off because it has never worked for me. Also, how is an autocorrect supposed to know whether you are writing a technical term / acronym / etc and not an actual word?

My favourite iMessage ‘feature’ is how now and then it picks a real word and refuses to let me use it without autocorrecting it to something else.

Ah, my favorite is autocorrect changing `its` to `it's`. Makes my grammatically correct sentences incorrect :-)

Oh this one gets on my nerves so much! And when you go back to correct it, it still tries to “correct” you the second time around.

Yeah I recently moved to Menlo Park and for some reason after I type “Menlo” (which it correctly capitalized automatically), it always suggests random words like “this” or “it”.

Sure, some people would say “I went to Menlo for lunch today”, but it should always have the option of “Park” (which when I type it, it always knows to capitalize).

> ... in the era of weapons-grade language models?

Just an aside: although the common perception is that "weapons-grade" refers to the highest quality, in reality it refers to the lowest possible quality that meets the bare minimum requirements!

So? That’s what every standard means: “anything just barely over the minimum that the standard specifies”. That can definitely be wonderful and impressive if the standard is high to begin with.

You might as well refer to the bronze medal as “just barely enough to be in the awards ceremony”. Is that supposed to be clever somehow?

> Is that supposed to be clever somehow?

I thought the parent comment was mildly interesting. There's no need to be so combative.

Pointing out tautologies that only sound convincing because of misleading phrasing isn't interesting.

You aren't the gatekeeper of what can and can't be considered interesting.

FWIW, your comment above (about standards) was also mildly interesting to me. I hadn't really given the phrase much thought.

There was no need to be rude about it though.

I'm not the gatekeeper, no, but I can alert you to when you're being duped by someone parroting nonsense.

Great! Just try to do it without being condescending (especially when the comment you're replying to is completely innocuous).

Those weapons-grade language models take hundreds of gigabytes of storage. But yeah, even after downsizing to a few dozen MB they should still be better than what's there currently.

I've recently switched back to iOS and hoped this behaviour was temporary as it got to learn my swiping style. Gboard on Android is almost perfect.

There's also a Gboard for iOS. I can't compare it with the Android version, but it is much, much better than the default Apple keyboard for swipe typing. It has become my default keyboard since Swype was discontinued, with a short stint in between on Apple's standard keyboard right after they added swiping.

To be fair, the Android keyboards (like Gboard, Google's own) are also bad. It's a tough problem for sure, but they haven't gotten better for at least the last 6-7 years, in my experience, and that is the point of the parent post, I guess.

Gboard is not perfect and shows some erratic behavior described by the parent poster, but it isn't as bad as Apple's keyboard in my experience.

However, while SwiftKey might be more conservative and feel a bit slower at times, its autocorrect and prediction do a really good and consistent job.

The worst thing for me is that I frequently miss the space character and type a period instead, and Gboard can't understand that it's a mistake AND won't autocorrect and suggest words for what I'm typing after that period.
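That particular mistake seems mechanically detectable, since the period key sits right next to the space bar: if "word1.word2" splits into two dictionary words, treat the period as a missed space. A hypothetical sketch, not how Gboard actually works:

```python
import re

def split_accidental_periods(text, dictionary):
    """If "word1.word2" parses as two dictionary words, treat the period
    as a missed space (the key sits right next to the space bar)."""
    def fix(match):
        a, b = match.group(1), match.group(2)
        if a.lower() in dictionary and b.lower() in dictionary:
            return f"{a} {b}"
        return match.group(0)
    return re.sub(r"\b(\w+)\.(\w+)\b", fix, text)
```

The dictionary check keeps legitimate dotted tokens (version numbers, domain names whose parts aren't words) intact.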

Is this true? Lately I noticed a change: autocorrect now suggests based on previous words. I am not sure this used to be the case.

For example if I type "this is not the" it suggests the next word might be "case".

This whole thread is why I always switch off auto-correct on any device. Mixing two languages is horrible. Unusual words / slang / abbreviations / street names / any unknown word throws it off. It's just painful to correct auto-correct, and the amount of facepalming involved.

Plus, I feel like my ability to type things correctly might suffer from crutches like this.

The same goes for voice recognition, when it tries to squeeze a specific name into a set of common words. It's just bizarre. In the past, when I tried the Google voice assistant or Siri, I always had to put on my best American accent (almost comical); otherwise they would often not understand.

“may” -> “May” will be the death of me.

Compare Google's and Apple's privacy policies. Autocorrection and autocompletion are far easier if you're willing to hoover up vast amounts of user data to train your models and provide personalisation. It's a similar story with Siri and Google Assistant.

They have been actively moving to federated learning and analytics, so far less data leaves your phone.


Are you saying this is an unsolvable conundrum? You can't have good autocorrection without massive privacy violations?

I'm not sure I believe that.

> I'm not sure I believe that.

So, what would make you believe it?

Someone making a cogent argument in favour of it?

Same here, it’s painful and actually seems to get worse the more I use my phone. I’m pretty sure it capitalises words found in your contacts: fine for surnames, very annoying for Home, Bank, etc.

> What is going on?

I have no idea what is going on, but the answer is installing SwiftKey

So now everything you type is also sent to SwiftKey servers ...

FWIW, SwiftKey was a paid app a few years ago. Then it became free. How do they still earn money to keep the company running?

(I always had an Android phone but I assume it's also free on iOS)

IIRC that's actually a setting you can opt out of, though the wording in the app settings is pretty ambiguous

SwiftKey was bought by Microsoft; they don't need revenue anymore. Or maybe they do?

Not only that, but it got worse after iOS 10/11.

Same experience with gboard on iOS.

I’ve found it’s much easier to disable autocorrect altogether and simply take responsibility for what you type. I mean that with no snark; that’s actually how I view it. Over time you definitely improve at typing if you disable it.

I have rather petite hands and fingers, I should note.

I’ve been typing on iOS devices for more than ten years and I don’t think I’ve ever typed a single sentence without errors (even after autocorrect, which thought I typed decides instead of devices up there).

I find it’s easier to mentally parse typos than it is to parse the wrong autocorrect word.

> Is it actually 2020?

I think you found the problem

You can't be an AI leader when actual AI leaders won't touch you with a 10-foot pole. When Ian Goodfellow joined Apple, there was literally a rain of criticism on him, from ending his career as a researcher to bowing down to money. I don't know of any researcher who wants a continued research career and is willing to join Apple. They simply don't allow that kind of freedom or publishing of results. While Apple has some strong points in imaging (thanks to their 1000+ person team) and wrist rejection, virtually everything else they do that requires AI sucks and lags behind the competition, including Siri, maps, autocorrect, spell check, iCloud, search, calendar, spam detection, recommendations, etc. For most of these things, most people don't even count them as real competition. Google, on the other hand, is able to achieve very competitive phone-camera performance through software and AI without such a large team, and with frankly quite pathetic hardware.

These kinds of reality-distortion pieces aren't going to help them. They have $100+B in cash; they could easily start a reputable open research lab to rival FAIR, OpenAI or DeepMind. Even smaller companies like Intel and Adobe are starting to realize that this is necessary so they can tap into expertise on demand. At minimum it would be totally worth it for a talent pipeline that can be motivated to do "rotations" or "sabbaticals" into product groups from an open lab.

It's a shame that Apple doesn't have an industrial research lab; it certainly has the funding to create a lab that would rival Microsoft Research or perhaps even legendary labs like Xerox PARC and Bell Labs. Apple used to have a lab: during the "interregnum" years of 1985 to 1997, Apple had a fantastic research group called the Advanced Technology Group, led by the late Larry Tesler. In some ways this group was a sort of spiritual successor to Xerox PARC; Larry Tesler was ex-Xerox, and the legendary Alan Kay (of Smalltalk fame) and Don Norman (who wasn't from Xerox PARC but who is a legend in usability) were involved in this group. This group worked on many interesting and important technologies, such as QuickTime, AppleScript, OpenDoc, HyperCard, speech recognition, and more. Even though the Dylan project did not come from the Advanced Technology Group, it is another example of interesting work that came out of Apple in this era. Even though Apple struggled commercially during the latter half of the interregnum, it created some amazing technologies during this period.

Of course, the Sculley/Spindler/Amelio Apple seemed to be far more open regarding research than the Jobs- and Cook-era Apple. Jobs closed down the labs in 1997 and helped institute Apple's famous culture of secrecy that persists today.

Has the culture of secrecy benefited them at all since the launch of the iPhone?

I think the answer to that question is yes.

I'll argue that AI leadership is not limited to just research, and an underappreciated aspect of leadership is making actual, useful things for people (e.g. applying AI/ML technology to problems and making those solutions common or ubiquitous).

It would be hard to be a leader without having leaders in the field at your disposal. There are literally 20,000 papers getting published each year. While stunning gains are being reported every month, it would be very hard to identify good, implementable papers with state-of-the-art results unless you are active in the field. Many of the techniques are amalgamations of a dozen other techniques, so it is even harder to add/subtract from a given paper to productize the results. Keeping up with these papers and techniques is literally a full-time job, which is why you need researchers at your disposal if you are working on these problems. Unfortunately, researchers are almost never willing to just give up research and become your next engineering dude. They are in the field because they love research and freedom and can publish. So the viable way is to have a research lab which you can tap into on demand, with options for rotations. Many people have believed that this can be done without having your own research lab, but that doesn't often work out. If you approach a researcher in academia for consulting, you have to count on them being free from academic duties, grad students and other commitments. They will typically do 80/20. If you have your own research lab, you can turn this around to 20/80.

There's more to leadership than research and pushing state of the art in papers.

There's also a lot more to making useful AI/ML-powered technology than having good models.

Defining and limiting the field to just research and models is a common pitfall I've seen.

Apple has fallen way behind in software/apps, and maybe they're ok with that. The hardware and OS are good, and some staple apps for creators are good. But at everything else they seem really lethargic or absent. For example, Apple has had zero impact on my tooling and knowledge as a web developer. But their laptop is nice and I have one, which may be good enough for them.

Apple has had some impact on the web for me. I could ship ogg/webm universally in <video> and <audio> tags, but Safari on iOS is the only browser that demands an mp4.

FWIW, I used a MacBook Pro for work (2018) and the wrist rejection was bad enough to make it unusable. Whenever I typed, it would constantly move the cursor around, usually with a click that would change where my text was going, no matter what settings I adjusted.

It was less of an issue with the older MacBook Airs, but the Pros now have a giant oversized trackpad that you have to touch when typing unless you hover.

What is wrist rejection?

Recognizing and disregarding input from your wrist/palm when it's resting on the iPad surface while one is using the Apple Pencil.

...Psychosomatic cognitive dissonance?

I don't own or use Apple products so this takes more explaining for me.

You're typing on your laptop and you don't want your wrist/palm to accidentally trigger anything on your trackpad, so the system employs wrist rejection.

That’s an interesting take. Does Apple really lag? I think quote-unquote AI in SV has pretty much stalled in the past 3-4 years. There’s been a lot of volume in ML but nothing groundbreaking.

That's not true at all. AI is everywhere. When Gmail completes your sentence, an auto-playlist gets created, the camera detects a pet, you take a night shot of a city, grammar suggestions are shown, wrong keystrokes get auto-corrected, voice mail is transcribed, a phone call is responded to with a quick text, your sleep is analysed, heart rate is monitored on runs, news articles get recommended, spam email gets filtered, device idle is detected... I can go on.

Most of these problems cannot simply be solved by throwing in the best engineers. No amount of classical algorithms you learned as a CS major is going to help you implement the best solutions for these problems. The state-of-the-art solutions require intensely, narrowly focused researchers who have studied these problems for many years, know which 10% of the papers are even worth looking at, and know the pros/cons of different techniques. Something as benign as running a neural net on phone hardware is an intensely researched subject, and your implementation can be literally 10X to 100X better in speed and power consumption if you have kept up with the field.

None of those things was solved in the last 3-4 years though.

Some might now be done via AI, but are they done noticeably better than 3-4 years ago? I certainly don't trust my camera to adjust itself, or the sleep analysis to be any good, or the article recommendations not to be based on outrage, or the spam filter to work at all.

That's the whole point. If you are still using classical solutions for these things that are years old, you are far behind the best experience and the competition. As an example, noise removal in low-light images has advanced tremendously each year, with stunning gains. The same goes for grammar correction and auto-correct. You can see the difference between Siri and Google Assistant: it is literally an order of magnitude. Siri even has trouble doing proper voice recognition, and as Apple does not even know how to do search, its question-answering skills shine only on highly curated tasks. The first thing I've always done on my iPhones is turn it off. Google, however, has been amazingly improving this stuff every year. The end result is that while I do use an iPhone, I tend to use mostly Google services, and when something is only available on Android I get a tremendous itch to switch.

It sounds like you are just advocating the corporate tech-transfer model of advanced research in a nascent field, but maybe Apple is philosophically opposed to doing that. Indeed, it's not that they don't have funds for scientific research. It's easy to say Siri is inferior or whatever, but that contains hidden assumptions about what is really valuable in consumer tech.

> You can see the difference in Siri and Google Assistant that is literally an order of magnitude.

How is this measured?

To compare that list to Apple: my Mac often (but not predictably) complains that things like "the" aren't words. Literally worse than a lookup against /usr/dict.

My 2c is that Goodfellow is probably working on their self-driving project.

I don't have any inside insight, but I'd be surprised if that's the case. Goodfellow's line of research has been adversarial learning [1].

[1] https://scholar.google.ca/citations?hl=en&user=iYN86KEAAAAJ&...

"Google is an amazing company, and there's some really great technologists working there," he said. "But fundamentally, their business model is different and they're not known for shipping consumer experiences that are used by hundreds of millions of people."

What does he mean by this? Google search and Android are both used by more than a billion people. YouTube over 2 billion. These are all bigger than any Apple product. If anything, Google is known for shipping consumer experiences that reach a large number of people. Apple by contrast is known for its high-quality, high-price consumer experiences that reach fewer people.

He's just being subtle and keeping it short of Apple's usual marketing speak ("We chose not to sell data"), perhaps as a goodwill gesture to his former company.

>they're not known for shipping consumer experiences

IMO, the difference is not in selling 'consumer experiences' but in selling 'consumer aspirations', which, as you rightly pointed out, is not for everyone.

I would put it slightly differently:

Google is known for shipping experiences that are used by hundreds of millions of people.

Unfortunately that model means they have to ignore everyone individually.

I thought that was odd, too. My best guess is that he’s emphasizing “known” as in notorious. Apple is “that company that ships experiences” whereas google is that company that records you while you use their apps or whatever.

Is that even true? I mean on HN, sure, but in the general population... I have no idea what the perception of Google is.

(Googler here) I feel the article presents a false dichotomy (IMHO a borderline disingenuous impression): that Apple is doing on-device, and everyone else is doing on-cloud.

Android also does on-device. A lot of the time, things start out on-cloud, until whatever machine learning model can be shrunk to run on-device at the right performance. So you see for example, text-to-speech and speech-to-text start out as cloud calls, and now they're on-device. Google Translate ran in the cloud, but now for some languages, it happens on-device.

Things like Google's "Live Transcription/Caption" on Android wouldn't work if it wasn't on-device.

Apple similarly went to the cloud for Siri speech recognition and TTS until they could run it locally.

For other things which need large models, there is Federated Learning to preserve privacy. Google Keyboard has been using Federated Learning for some time now.

Apple's AI work is decent for what it is, and their AI hardware is fine too, but so much of this article is just weird.

It starts by talking about how until recently Apple wasn't doing AI work where it needed to be. Then there's the weird excerpt where he claims Google is “not known for shipping consumer experiences that are used by hundreds of millions of people.” The author then raises the legitimate point that AI benefits from having lots of data to train on, but then quotes an answer by Giannandrea to a different question, which includes him stating that bigger models aren't more accurate than smaller ones. The point that on-device inference is more responsive is valid but not unique to Apple; the article says “Android phones don't do nearly as wide an array of machine learning tasks locally”, but I don't think this is true.

Was it written by gpt3?

I found the conflation between training and running models very confusing, or even disingenuous. The whole point about evaluating a model frame by frame with low latency, locally on the device, is a good one, but having that as a follow-up to why they don't train the models with lots of data on a server... well, I don't really know what to make of that.

Apple’s chips are way better so whether they’re doing more locally or not they have the headroom to do a lot more.

I would agree with that characterization of Google. They have a very small handful of successes and an enormous pile of failures that they’ve discarded. They give every impression of not really knowing what to do.

I think Giannandrea was also referring specifically to the kinds of experiences Apple can provide on the iOS and iPadOS platform because of their high end hardware and deep software integration. Google has yet to replicate that, and even seem to be bored with Android recently: https://daringfireball.net/linked/2020/08/05/wear-os-music

Apple's AI accelerator is meaningfully better than the Snapdragon 855's, but way behind the 865's.

> I would agree with that characterization of Google.

No. It's fine to dislike Google's stuff, but words have meanings. The claim that Google doesn't ship ‘consumer experiences’ to hundreds of millions of people is objectively false.

The 865 is a generation newer than the chips Apple has released. I'll wait and see what happens come September and October of this year.

>No. It's fine to dislike Google's stuff, but words have meanings. The claim that Google doesn't ship ‘consumer experiences’ to hundreds of millions of people is objectively false.

Sure, they have some experience. But their core business is not shipping experiences that consumers pay them for. That is all.

What AI failures have Google had? Google Home is much better than Siri or Alexa. The camera feature in Google Translate works really well. Their new auto subtitles on Android work really well. Searching photos for objects works really well.

The only thing I can think of is that thing they demoed that would call restaurants to book them for you, but that was clearly highly experimental, and it's not like Apple has done that.

None of those are at risk of taking over as a revenue stream from ads on search and YouTube, is my point. You're right that it's cool stuff but right now it is being subsidized by a staggeringly more profitable business. I'm skeptical of their ability to pivot away from the dependency on search, which is good enough, obviously, but they seem to have higher aspirations of which they consistently fall short.

I really do want to see them succeed I guess I'm just overall disappointed with where they're at right now and I think they've somewhat overhyped it. I'm not a voice assistants person, I disable Siri personally.

Ok what's your point? Does Apple's search engine work for this query?

This piece is interesting because Apple was saying this all along but no one really believed them because it sounded like excuse making. But here JG is basically saying the same thing:

>Yes, I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around. And whether that's location data—like what are you doing— [or] exercise data—what's the accelerometer doing in your phone—it's just better to be close to the source of the data, and so it's also privacy preserving.

A few years ago was when this narrative was at its peak and I believe it was mostly because Google (and to a lesser extent Facebook) were talking about machine learning and AI in basically every public communication. What came of it? Were all the people who claimed Apple's privacy stance would leave them in the dust proven right? For one, being "good at machine learning" is like saying you're good at database technology. It's a building block, not a product. Maybe Google and Facebook are doing cutting edge research in the field, but so was Xerox PARC.

When it comes to machine learning, the subtlety here is that there are at least two sides or facets to machine learning: (1) training and (2) inference.

It's fair to say that there are multiple areas for AI leadership.

It is generally believed that:

Model Creation

(1) Those with access to the best data (which is not necessarily the most, but is often believed to be) have a strong starting point for training models; because of this, Google, Facebook, and Microsoft are often credited with this advantage due to the nature of their businesses.

Model Application

(2) Inference/prediction at the edge, e.g. on-device, is believed to be the best point for applying those models; this can be for a variety of reasons, including latency and other costs associated with sending model input data from edge sensors/devices. Some applications are entirely impractical or likely impossible to achieve without conducting inference on-device. Privacy-preservation is also a property of this approach. Depending on how you want to view this, this property could be a core design principle or a side-effect. Apple's hardware ecosystem approach and marketshare (i.e. iPhones) provide a strong starting point for making the technology ubiquitous for consumer experiences.

Re: Prediction at the edge, I would think that it's better if there aren't going to be any updates to the model. Or if internet access is limited. Correct me if I'm wrong, but most of the ML inference actually takes place on the cloud nowadays, not on-device.

Here's a nice example that, like a lot of the best things, is passively present: my Pixel 2 knows what music it hears.

There's a pop song playing, I kinda like it. I could pay attention to the lyrics and try to Google them or ask somebody that might know what it is... no need, I just look at my phone, "Break My Heart by Dua Lipa" it says on the lock screen. The phone will remember it heard this, so if I get home this evening and check what was that... oh, "Break My Heart by Dua Lipa".

Google builds a model and sends it to phones that opted in to enable this service. It's not large, and I actually don't know how often it's updated - every day? Every week? Every month? No clue. But the actual matching happens on the device, where it's most useful and least privacy invading.
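As a rough illustration of the parent's point (not Google's actual pipeline; the song names, embeddings, and threshold here are all made up), on-device matching against a small, periodically shipped database can be sketched like this:

```python
import numpy as np

# Hypothetical local database of song "fingerprint" embeddings,
# shipped to the device as part of a small model update.
SONG_DB = {
    "Break My Heart - Dua Lipa": np.array([0.9, 0.1, 0.3]),
    "Blinding Lights - The Weeknd": np.array([0.2, 0.8, 0.5]),
}

def embed(audio_snippet):
    """Stand-in for the on-device neural fingerprinter.
    A real system would run a small conv net over the audio;
    here we just L2-normalize the raw vector."""
    return audio_snippet / np.linalg.norm(audio_snippet)

def match(audio_snippet, threshold=0.95):
    """Match a snippet against the local DB; nothing leaves the device."""
    q = embed(audio_snippet)
    best_song, best_score = None, threshold
    for song, fp in SONG_DB.items():
        score = float(q @ (fp / np.linalg.norm(fp)))  # cosine similarity
        if score > best_score:
            best_song, best_score = song, score
    return best_song  # None if nothing clears the threshold
```

The privacy property falls out of the structure: only the database is downloaded; the audio and the matches stay local.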

IIRC, for example phone keyboards use federated learning, where the model is further trained locally. You don't want to send every word the user types, for privacy reasons and others. Some kind of "diff" of the local model can then be sent to the cloud at some time to add up to the base model.

There is still an attack vector, you can infer a bit from the "diff", but you probably can't tell exactly what the user wrote.
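A toy sketch of that "diff" scheme (FedAvg-style; the linear model, loss, and learning rate are arbitrary choices for illustration, not what any real keyboard uses):

```python
import numpy as np

def local_update(global_weights, user_data, lr=0.1):
    """One client's on-device training pass (linear model, squared loss).
    Only the weight *delta* is reported back, never the raw inputs."""
    w = global_weights.copy()
    for x, y in user_data:
        grad = 2 * (w @ x - y) * x   # gradient of (w.x - y)^2
        w -= lr * grad
    return w - global_weights        # the "diff" sent to the server

def federated_round(global_weights, client_datasets):
    """Server averages the clients' diffs and applies them to the base model."""
    diffs = [local_update(global_weights, d) for d in client_datasets]
    return global_weights + np.mean(diffs, axis=0)
```

The attack surface the parent mentions is visible here too: each diff is a function of one user's data, so without extra noise or secure aggregation it still leaks something about that data.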

But can you lead in (2) without being good at (1)?

If you train on limited data, then your inferences will be of poor quality, even if they have low latency.

So in sum, it seems you can't be a leader in ML without both.

You can lead in (2) with good enough (1).

Being a leader in (1) does not mean you'll be good at (2), and vice versa.

There's also a difference between limited data and good enough data.

If you train on good enough data, you can have good enough models.

If people believe the focus of AI/ML should just be precision/recall, or other measures of accuracy, and having tons of data, they're missing many other areas and elements for what make AI/ML successful in application.

It's not my area, but I read Google's AI blog and they seem to be doing machine learning research that ends up in its phones, such as various camera improvements and offline Google Translate.

It seems like it's the sort of thing you do when you can, but often it comes as a second phase after getting it to work in the data center.

I didn't see anything in this article that was obviously unique to Apple.

I think a lot of people have underrated the benefits Apple reaps in this area from being so far ahead in chip design and controlling its own hardware. As regards successfully productizing ML, I've recently become convinced that Apple has been more successful at it than Google so far. Translation was already well within Google's wheelhouse and they had been doing it for years, and their work with cameras is inevitably going to be hindered by the fact that they can't depend on certain specs the way Apple can. I suspect the Pixel exists at least partially to prove that they understand those benefits.

Apple is bad at AI because Apple is bad at software, and increasingly bad at common sense. Being good at hardware won't compensate for that.

Example: Apple Maps regularly thinks I need help finding my way home from locations close to where I live. Some basic practical intelligence would understand that I have visited these places before and there's a very good chance I already know my way home.

It would know that I would appreciate a route if I'm a couple hundred miles from home at a location I've never been to. But a shopping trip to the next town fifteen minutes away? Thanks, but no - that's Clippy-level "help."

IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.

Siri was a magnificent start but it seems Apple not only failed to build on it or take it further but actively scaled it back to save a few dollars in licensing fees.

Google is doing better by looking at specific applications - like machine translation. But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.

I’m not so sure your example is the best illustration of “bad common sense”: I can see a ton of use cases where this is useful for a lot of users. For example, you live in the suburbs next to a highway, and your route home is usually 15 minutes. You get the notification and see that today it’s 30 minutes because there is an accident. Instead you take the back roads, or wait at work a little longer for it to clear. The directions aren’t the value; the time estimates are, because you can’t know the current traffic conditions on the highway. It’s a replacement for the 4:30 PM traffic updates on the radio.

Also it works well with the “share ETA” feature where you can automatically share your ETA with family when you start directions home.

And anecdotally, my google home does the same thing at 5pm every day...but it gives me directions “home” from “work” when it is actually routing me to my old house which I moved from 6 months ago. My home address is updated and the old one removed, so

Yeah, I just completely disagree. I agree that Maps' machine learning isn't very good but I've been completely unimpressed with Google's as well.

>IMO the company is stuck in the past, its software pipeline is so badly broken the quality of the basic OS products is visibly declining, and it's unprepared for the next generation of integrated and invisible AI applications.

This has been a meme for a few years now and I don't know what the basic quality that's declining is. Their development has arguably accelerated and it doesn't seem like it has any more bugs than normal. I agree that Siri has not been advanced as much as it should have been but it seems like they're working on it.

>But because it's an ad company with a bit of a research culture it can't imagine an integrated customer-centric general AI platform - which is where the next big disruption will come from.

I'm not sure what you mean by "general AI" but I think Apple has the best shot at it of any company working right now (unless you mean AGI).

Even there, Google is hindered by the fact that it is so bad at hardware. The Pixel's camera lead is not as much as it used to be, and now Apple is ahead on video. The newest phone from Google uses the same sensor as the Pixel 2, whereas others have moved onto bigger sensors with quad-bayer filters.

The Pixel is Google being aware of their handicap in in-house hardware prototyping and research, and, predictably for Google, they apply ML to it too.

Apple can come at this same problem from the hardware and software sides all at once, with their own internal dev cycles aligning with the yearly iOS/iPadOS software drops, and iPhone/iPad hardware drops timed to a month or two of each other, year after year. Sure, it’s buggy as hell, but it still works better than Android. One would hope so, since you can’t do proper sideloading, as Android natively supports.

Apple’s security argument is a childlike excuse for not doing one’s homework, not a reasonable justification for Apple unreasonably and intentionally feature-gating iOS and iPadOS devices. I’m an owner and I’m root. I fully control devices I own because that is a Natural Right of exclusive ownership of hardware devices under American First Sale Doctrine, which has also been recognized by the Library of Congress as Constitutionally-protected usage to preserve inalienable rights to nondiscrimination in computing devices.

So Apple can take a hike. We’re getting what we need, and we’re getting bugs fixed. Jailbreaking has surfaced more 0days than jailbreak devs squirrel away for the next rainy day after iOS updates. We need native full root support, full stop.

These research phones have landmines everywhere in the license agreements to get one, and I’m not re-buying an iPhone I already own to get half-assed fakeroot, especially when I’m already running as root on all the iPhones I ever bought. It’s not hard to avoid updates and save blobs, but should we really have to intentionally run old, known-insecure builds just to have r/w and root? Is this a mainframe? This is a joke.

What is Apple even arguing against? It’s not reality, that ought to be clear. They’re arguing for increased and continued profits for Apple, at the cost of our rights being trampled and violated, and we’re supposed to accept that they have our best interests at heart via increased security? Tell me another joke. Benjamin Franklin and I will be here all week.

The lack of free sideloading without a $99/year dev ticket is a joke. The scraps we get with 7 days between resigning apps is a joke. Devs are forced to abuse TestFlight to distribute apps which would otherwise be listed on App Store, if not for developers’ fears of App Store rejection, and potentially TestFlight revokes.

There’s gotta be a better way, but jailbreaking is the best we’ve got, for now. To that point, the Jailbreak Bot on Telegram is a public service, as is saurik himself and the entire reddit community r/jailbreak.

All that being said, Apple really is in a league of its own, with the market capitalization to back it up; Apple has features found in other companies while simultaneously being unlike every other company on Earth.

How is this at all related to discussing Apple’s position in AI/ML?

I agree that iOS should have some escape hatch where power users can sideload, but the problem is that the current situation is really good for privacy. Apple is able to act as a powerful agent for the powerless user, forcing companies to respect the user's device and privacy. You can see on Android the same apps tracking the user in ways that just wouldn't be allowed on iOS.

If there was a way to sideload apps then Amazon would probably create a 3rd party app store to compete with the main one. This would probably result in major apps moving over so they can abuse users in all kinds of ways banned from the app store.

I'm not really sure what the middle ground is here where iOS continues being the privacy OS while also letting the user do whatever they want.

That’s not true. Jailbreaks exist, and always will. That’s the current status quo. Opening up sideloading will surface more bugs. Apple already has best-in-class built-in 2fa for Apple ID. To say they can’t just leverage Secure Enclave to do the heavy lifting is just to fail to make Apple’s argument for them. Security would benefit, because Apple would have more devs seeing and reporting more bugs if sideloading were possible. It’s a simple truism. Look at the new Microsoft.

In a sense maybe the iPhone (and Android phone) is a mainframe?

The user shares the system with ... the apps.

It still sounds like excuse-making because Apple are behind in their own chosen terms: their competitor launched on-device machine translation a long time ago. The only remaining part of Apple’s AI privacy story is insinuation.

Well, Google’s predictive keyboard and speech-to-text leave Apple in the dust. As does their voice assistant. So... yes?

I’m not an Apple fankid, but the predictive keyboard and speech-to-text work wonderfully on my iPhone, and I use Siri every day with no problems. Google’s is probably better, but the difference is not a meaningful one from what I can tell.

As someone that bought the first Android phone and now has an iPhone...it is extremely noticeable how much better Android is on these fronts. But depending on the person that gap may not be important. For me, it is night and day. I felt Google had a long way to go on these fronts, so I was stunned at just how bad Apple was :(.

As someone who switched to iOS from android I’ll second this.

Also, Google's swipe keyboard is WAY better than the iOS implementation.

Just switched from Android to iOS, and the major issue is that it tries to correct things that are not real words. Somehow the Android one was better at detecting whether something is a typo of a common word or just something it doesn't know.

That's pretty much what I hear from anybody who has switched to iOS after a while on Android: they start by complaining about iOS's keyboard.

If we're just going to give anecdotes, I'm usually an Apple fankid but I get super annoyed by autocorrect on iPhone. The Google one when I've used it does seem a lot better, and also faster.

I don't know anyone who believes that Siri is as good as Alexa. I spent four months self-isolating in an apartment with the small hockey puck Alexa devices in random corners and returned to my big open plan apartment with two paired HomePods on the kitchen counter.

The frequency with which Siri shits its pants (can't help, asks me to excuse it being slow as it tries to set a timer, mishears me, etc.) is honestly remarkable.

(Not to mention the fact that my phone continues to alert me to text messages read on my Mac or iPad whole minutes ago.)

Apple is still working to overcome deep problems in both its cloud infrastructure and AI/ML. If they cannot be honest about this, they should not be dishonestly trying to present a picture of all being well.

Apple says they do on-device, so yea, why is Siri "thinking" for 20 seconds when I ask it to set a timer for 10 minutes for the 1000th time

Show don't tell. You don't lead in AI or anything else by insisting everybody who noticed you're bad at something "has it wrong".

Did you read the article? The whole point was that Apple has innovated but that they don’t make a big deal out of it. Marketing, basically.

Yes, it's more marketing than research here. They still push their reality distortion spells on us to believe that they're an AI leader but in reality they innovate on the ideas of others; just like old times.

> Did you read the article?

I really hope you have read the HN guidelines [0]

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

[0] https://news.ycombinator.com/newsguidelines.html

I did not read the guidelines. My mistake. You’re right.

> Machine learning is used to help the iPad's software distinguish between a user accidentally pressing their palm against the screen while drawing with the Apple Pencil, and an intentional press meant to provide an input. It's used to monitor users' usage habits to optimize device battery life and charging, both to improve the time users can spend between charges and to protect the battery's longterm viability. It's used to make app recommendations.

The problem with machine learning altering the behavior of a device is it shortcuts human learning. The human brain is very good at learning deep insights about things and its environment and alters its behavior accordingly.

If things change while we're learning about them, it confuses and upsets us. A dumb machine is much easier to use than a "smart" one.

This might be true for mostly user interface stuff. But for things that are happening under the hood, a human mind will get accustomed to a gradually improving user experience.

Apple's mantra seems to be to let everyone use the device per their whims and fancies, and let the device figure out how to deal with it.

Ultimately this leads to user-experience lock-in, where other devices that don't adapt feel clumsy or stupid.

Did the interviewer not know enough about ML to even challenge the bogus statements?

Yes, Apple’s strategy is more privacy-protecting.

But holy hell, yes, larger data sets are going to be more accurate, and the resultant model doesn’t have to run in the cloud; it can run locally.

Train the AI in a data center with large data sets, then ship the model to local devices to execute.

That is ALWAYS going to be better and more accurate than what Apple is doing.
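As a toy sketch of that train-in-the-cloud / run-on-device split (the logistic-regression model, the JSON artifact format, and the data here are all hypothetical, purely for illustration):

```python
import json
import numpy as np

# --- "data center" side: train on a large dataset ---
def train(X, y, lr=0.5, steps=200):
    """Plain gradient descent on logistic loss, no bias term."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient
    return w

# --- the model is shipped to the device as a small artifact ---
def export_model(w):
    return json.dumps({"weights": w.tolist()})

# --- on-device: inference only, no user data leaves the phone ---
def predict(model_json, x):
    w = np.array(json.loads(model_json)["weights"])
    return 1 / (1 + np.exp(-(w @ x))) > 0.5
```

The serialized artifact is the only thing that crosses the network boundary, which is exactly the split the parent is describing: heavy training where the data is, cheap inference where the user is.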

I agree that Apple is underrated here, and I think their rigid user interfaces for things like Photos hides the power of their platform.

It seems absurd to let Google Photos slurp up your data server side when your iPhone can do 80+% of the photo categorization automatically. It’s equally absurd that Apple has a glacial pace of change for the user side.

In my and several of my friends' experience, Google Photos search is simply much, much better than Apple Photos. Even though I use Apple Photos personally, I have to admit Google's is just better. My friends always say "if I can't search for my photos easily, what's the point in amassing a large collection?".

It's not just search. Google Photos "slurps" photos at 10-20x the rate of Photos.app with clear indication of what's happening and what's being uploaded.

Photos... just sits there. To get a new photo to my desktop it's faster to open Google Photos on my desktop and download the photo from there than wait for both Photos.apps to finish syncing.

It would be nice to have an option to not delete every photo after backing up to Google Photos.

Search for object detection on your iPhone. It’s way better than the experience on a Mac.

True, Apple has a big advantage by having a sense for the baseline-level of computing power on client devices, which is pretty high across the board.

The uniformity of UX is worth the glacial rate, IMO. The more ScrollViews with arbitrarily laid out content (Photos seems to have these increasingly), the less intuitive I find apps. In Photos, I find it hard to get a sense for what menu-depth I'm at. Apple has a very logical master-detail layout across most apps, and I hope with Mac Catalyst that Apple continues to enforce consistent UI practices.

The more freedom developers are given to put any old layout, the more apps start feeling the way Windows Forms, WPF, and UWP feels on Windows; it's like you have to start thinking about "Okay, when was this app designed?" to figure out how to use it.

plus all of those UI frameworks totally suck in one way or another

One person's slurping is another's always accessible

It could still be always accessible, without the middleman actually being able to read the data.

Yes, but also no. It's a less convenient flow, and indexing and such would be trickier than either centralized or on device, since you'd need to encrypt and distribute the index as well (or do homomorphic encryption, but I don't think it's ready for the limelight.)

I think most people are probably fine with the tradeoff between convenience and features that centralization offers, so unfortunately people who aren't are stuck with more niche products (or Apple, I suppose, although I don't think iCloud is end-to-end encrypted?). I say they're stuck with niche products because I don't think it makes sense to try to offer a slider between convenience and privacy in one product, since the interfaces and feature sets would be so different.

Apple Photos should introduce an unlimited compressed-picture upload like Google Photos does.

I mean Google is analyzing your photos for that service. You can pay for iCloud storage. I’m totally fine paying 2.99/mo for my wife and I to not worry about our backups or having space on our phones for photos.

I just use my nextcloud server which has 3TB free. Works decent but doesn't integrate with the OS as well, still keeps my photos backed up which is the main thing.

Fluff piece. The Apple PR machine, learning that techies read Ars Technica.

Absolutely! Crafted to 1) create interest and a talent pipeline, and 2) identify and funnel user frustrations and comparisons.

An interview full of nothing... especially when they say Google has no experience shipping user experiences used by millions of people.

> After a brief pause, he added: "I guess the biggest problem I have is that many of our most ambitious products are the ones we can't talk about and so it's a bit of a sales challenge to tell somebody, 'Come and work on the most ambitious thing ever but I can't tell you what it is.'"

JG, I don’t think that’s your “biggest problem” - Siri is. Your privacy centric on-device strategy limits your view of user feedback, Google gets a lot of shit wrong but they know how to transmit user data and understand their feedback.

I like this article. I like Apple’s approach to ML because it blends in. When applied, the feature should not expose that it’s based on ML; exposing it is a failure. And so Siri, Alexa, and Google Assistant are failures, but Face ID and palm rejection are successes.

If you have to explain to the customers that it’s ML-based, that’s the same as asking the customers to understand its unreliability. And unreliable features are worse than no features, which is why nobody uses Siri, Alexa, or Google Assistant except for a few reliably working requests.

Apple is a fashion house. They will be around for quite a while, and still have time to turn things around, but if they keep at their current pace, they will be back to irrelevance in 2 decades. Apple builds beautiful looking hardware. Software wise? Complete garbage.

It's the decline of Intel and the advancements Apple has made in designing its own processors that I think are really interesting. Having powerful and efficient processors on mobile devices - both laptops and phones - allows Apple to do edge computing in a way other companies haven't been able to, and integrate ML in much more privacy-focused way.

Aside from what they say, they still can’t get their iOS AI based spelling correction up to par.

Don't have iOS. But I do wish Android would detect the current language you are writing in.

I have 3 languages on my keyboard and sometimes it suggests an autocomplete in the wrong language.

I have German, English and Polish and it works perfectly.

Not so great with Dutch, French and English though.

I was very surprised at how good it actually was for Polish.

It's a hard language and usually not as well covered by ML as English or German.

It's like saying you are the smartest girl in the city.

When you have to say it yourself you probably aren't.

NVidia thinks they're an AI leader too, and they're probably closer to the truth.

Funnily, the one kind of data that seems to matter for Siri is web and wiki data for question answering. Siri still uses Wolfram for many trivial questions. None of this is about privacy or user data, and Siri is behind the state of the art here.

Apple is behind in AI. Google has Google Lens, real-time transcription of audio recordings, live captions. And Google Assistant is leaps and bounds ahead of Siri in nearly every way.

What is there in AI that Google doesn't beat Apple at?

Yes, Apple is behind in AI research and contributions. In every WWDC or product announcement they try to convince us that their AI is "with privacy in mind". We all know that's not true.

You have to get that data somewhere.

> What is there in AI that Google doesn't beat Apple at?

Excellent question. Unless we get Apple's equivalent of OpenAI, DeepMind, or FAIR, the answer is always "nothing".


I haven’t seen an Android phone with good face recognition yet.

What is AI? Your washing machine and some rice cookers use AI (fuzzy logic).

Everything uses AI. A program with an if statement is AI.
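For context, the "fuzzy logic" in those appliances really is more than an if statement: inputs are mapped onto overlapping membership functions, rules fire in proportion to membership, and the votes are averaged into one output. A minimal sketch of that pattern - the membership functions and power levels below are illustrative assumptions, not any vendor's actual firmware:

```python
def triangle(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def heater_power(temp_c):
    """Map pot temperature to a heater duty cycle via three fuzzy rules."""
    # Fuzzify: how "cold", "warm", and "hot" is the current temperature?
    cold = triangle(temp_c, -20, 20, 60)
    warm = triangle(temp_c, 40, 70, 100)
    hot = triangle(temp_c, 80, 120, 160)
    # Each rule votes for a crisp output; defuzzify by weighted average.
    weights = [cold, warm, hot]
    outputs = [1.0, 0.5, 0.0]  # full, half, and no power
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

Because adjacent membership functions overlap, the output ramps smoothly between power levels instead of snapping at a threshold, which is exactly what a bare if/else ladder can't do.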

One area where Apple shines is brainwashing. It looks very funny when caravans of people wait for store openings to hysterically buy an overpriced smartphone, even if it's technologically two years behind the cheaper Samsung S20 (except in chip performance, but that doesn't matter anymore, because there are no tasks that need so much computing power).

> because there are no tasks that need so much computing power

I think your own point explains why people are more than happy to use a more expensive iPhone despite technological differences.

I'm currently developing with a relatively new Android phone, but I'd never use it as an actual day-to-day phone in lieu of my circa-2014 iPhone (believe me, I've tried).

The extent to which I prefer the overall iPhone experience far outweighs any technological upper hand the Android has. In the scheme of things, phone technology has barely shifted in six years; a supposed two-year difference is utterly negligible.

Side note: what's the deal with the Google Play Store??! I tried to download Facebook Messenger a few days ago for testing, and accidentally downloaded an entirely different app with a very similar name and icon, as it somehow managed to occupy the top listing (an ad?). It seems obscene that it's so easy to mount a phishing attack against Android users.

I just switched from a Pixel 2 to an iPhone 11. To me, iOS and Android are pretty level quality-wise; they both have minor things they do better. But the number one thing that makes the iPhone better for me is that it does all the awesome stuff Android does without sending your data away for processing. Having image recognition without the privacy loss is worth the price tag to me.

> even if it's technologically 2 years behind cheaper Samsung S20 (except chip performance


You’re looking for a different website.
