As an extra F.U., it also changes the list of contacts after a second. So I try to tap on my wife, only to have it substituted with the plumber who came once half a year ago... and this of course gets logged by the AI, ensuring the plumber continues to hold pride of place in my contacts.
It's slow, and it encourages mis-taps. I have intentionally tapped it fewer than ten times ever, despite using the share dialog thousands of times, since it almost never shows me the contacts I want -- and even then I've tapped the wrong contact half the time.
It's incomprehensibly awful, wantonly violates even the most basic user-interaction guidelines of the past few decades, and there's no way to turn it off. What in the world are they thinking / drinking?
When you are in the “...” screen, tap on “edit” at the top right.
This is what the OP complained is not possible on Android out of the box.
I know it exists as an intentional dark pattern (so we just think that's what happened). But it also seems so common now across computing and it pisses me off every time.
It seems like they stripped all metadata, including visual names of items themselves, and instead substituted random words.
The result is like playing a text-based adventure game without a list of the verbs the game supports.
From memory, Windows 7 and even 98 had a perfectly reasonable and accurate search.
If I type VS... Visual Studio Code! Cool. (Wonder why it didn't suggest Visual Studio itself, which I also have installed, but hey, I got what I wanted.)
If I type VSC... ??? config files and some random XMLs from the deep realms of AppData
If I type vscode... No results, try a web search!
If I type Visual Studio... THE Visual Studio shows, but no Code in sight
If I type Visual Studio Code... There it is again!
The whole rigmarole is just... huh?! How does one even reach that point? I can't think of any naive buggy way that could reasonably cause such a discrepancy of results. Just search by filename and display name! Or whatever criteria, but be consistent!
But I'm literally mystified why there isn't a prebuilt index table that instantly loads the top results.
All Windows apps / panels + last 250 files opened shouldn't be hard.
So there is literally already a textual, and usually interpretable, path to any window.
Apparently tying search into that made too much sense though, and so instead we get a reinvented (slightly square) wheel.
I love that term and the Android share dialog has always been my top example.
But then I thought of: "Percussive enhancement". Maybe not semantically 100%, but... not 0%.
Asynchronous element loading saves time overall, but it costs time when key UI elements rearrange. It's probably difficult to pull off, but blocking/sequential loading for the current viewport, with asynchronous loading only for offscreen content, is probably what we need to avoid this (or ugly placeholders).
I find this happens with computer game UIs a lot, too, especially for dynamic UI elements that float above static UI elements. Especially when there is a lag due to animations.
Or maybe I have 15 years of expectation that Images will be the second tab.
Edit: nope, just did a search. It started Web, Images, Videos; then, right as the page finished loading, it switched to Web, Videos, Images.
So not only does search not work anymore, it's unnavigable.
Edit 2: it's not just video; it rearranges the tabs based on relevance. "John Wick" will move Videos to the second spot. "San Francisco" will move News second and Maps third.
I get it, but I also don't.
A product manager somewhere in Google is excited for their quarterly bonus.
I just clicked on Google ads three times in two minutes because it keeps showing the image search results first. Then, juuuuust as you're about to click, the row of ads loads in and you click on an ad instead.
The AI then congratulates itself on serving such relevant ads.
@9nGQluzmnq3M I don't mean to mock you. It's more that this sparked a general observation about how "funny" it is that nowadays we tend to call a lot of stuff AI. Back in the pre-internet days, this was just some kind of preference stored per user. From what I can see, the IT industry runs a lot on what's in fashion.
The sorts of things phones used to remember I'd never refer to as AI. A list of contacts sorted by the frequency with which I use them isn't AI. A list of contacts sorted in an order I don't understand, with a slight preference for frequent contacts, is.
The latter have proliferated recently, hence the shift.
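Frequency sorting really is that transparent; nothing about it needs a model. A minimal sketch of the non-AI version (the contact names and event list are made up for illustration):

```python
from collections import Counter

def rank_contacts(share_events):
    """Rank contacts by how often they've been shared to, most frequent
    first. Ties break alphabetically so the order is stable and predictable."""
    counts = Counter(share_events)
    return sorted(counts, key=lambda c: (-counts[c], c))

# A year of shares: the plumber appears once, the wife constantly.
events = ["wife"] * 40 + ["brother"] * 12 + ["plumber"]
print(rank_contacts(events))  # → ['wife', 'brother', 'plumber']
```

Anyone can predict where each contact lands, which is exactly the property the opaque version lacks.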
Clearly, you want your list of "frequently searched terms" stored locally on your device in a very small and efficient history file.
However, if you store this file on the server, you can hide from the user what actually gets stored in it; it takes longer, so it seems like it's doing harder work; and it occasionally gets things wrong for some reason -- which means: AI.
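That "very small and efficient history file" is genuinely a few lines of code. A sketch, assuming a plain JSON file on device (the path, cap, and function names here are all invented):

```python
import json

MAX_ENTRIES = 50  # keep the file tiny

def recall(path):
    """Load recent search terms, most recent first; empty list if no history yet."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return []

def remember(term, path):
    """Move a term to the front of the local history file, dropping duplicates."""
    history = [term] + [t for t in recall(path) if t != term]
    with open(path, "w") as f:
        json.dump(history[:MAX_ENTRIES], f)
```

No network, no account, and the user can inspect or delete the file themselves.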
Google Maps is like this. It completely refuses to remember your recently searched addresses if you disable Location History (which includes remembering and storing, let's call it a little bit more info than just my recent search terms).
This would be such a prime candidate for storing securely, privately on your device, for any type of map service, that I can only conclude this is deliberate hostile anti-user programming.
Also I bet there's code out there that just returns most-recently-searched with a few deliberate mistakes to seem more opaque and thus more AIey.
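For what it's worth, the joke version is only a few lines -- a tongue-in-cheek sketch (everything here is invented, not any real product's code):

```python
import random

def ai_suggestions(recent, n=4, seed=None):
    """Take the n most recently used items, then swap one slot for a random
    older entry -- just inscrutable enough to pass for "AI"."""
    rng = random.Random(seed)
    top = list(recent[:n])
    if len(recent) > n:
        top[rng.randrange(n)] = rng.choice(recent[n:])
    return top
```

Three of the four suggestions are still just MRU, so it looks almost helpful, and the fourth slot is your plumber.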
I'm sure there are companies which play the various games you're suggesting, but I think positing that it's the rule verges into the conspiracy theoretical.
For sure. With all these flavors of neural structured learning, it seems like we are just trying things out by training models. It would be good to have a way to actually explain to us developers how decisions are made; I know it is based on some kind of statistics.
If anyone can point in the direction that would be greatly appreciated.
It's definitely not sorted by contact frequency or anything close to it, because many of the people I share with all the time never show up. As I type, three of my "top" 4 (including that plumber) are SMS, which is doubly weird since I almost exclusively use WhatsApp.
How often does a badly implemented algorithm that should in theory just work, get labeled AI because in practice it returns opaque results occasionally?
It's like they know what our intent is, and intentionally replace what we want with what we don't want.