
Why is this _always_ repeated? Where are these patents? Where are the examples of eInk going after their competition? Because there is a _myriad_ of eInk-like technologies from many other companies, most of them literally better than eInk, that were available but were abandoned after they failed in the market.

One example I particularly liked is Mirasol, which was abandoned despite being owned by Qualcomm of all companies (HIGHLY unlikely to be scared off by a patent troll, considering Qualcomm could arguably be described as a patent troll itself).

It's simply ridiculous to think that eInk would torpedo their own technology out of incompetence/malice/whatever, yet these ideas keep being parroted here, without _any evidence whatsoever_, as if they were gospel from the gods.

The real reason, of course, is that this technology is hard (plain physics), and that there's little investment because most consumers could not care less. The supposed advantages of eInk are paper-thin at best (contrast sucks and keeps getting _worse_ with each generation, and that is without taking the color panels into account), customers have a hard time distinguishing it from other technologies such as reflective/memory LCDs (which beat it in practically every metric you can think of, even power usage -- except for idle periods long enough that no consumer cares about them), and at the end of the day most people will choose a backlit LCD over all these alternatives anyway...

See Garmin, which started with reflective LCD watches for outdoor usage; the moment they experimented with a plain old fugly backlit display, they decided to replace most of their series, _even the ones aimed primarily at outdoor usage_, with them (e.g. the Fenix 8). Customers just buy shiny, flashy screens more; what can you do about that?

eInk survives because it's actually one of the cheaper technologies, which is the only reason talking about "billboards" is even remotely plausible, and even then they're having a hard time.


There's a lot wrong in this comment.

eInk B&W screen contrast has been improving dramatically with every generation, but there was a significant step backward in the jump to color eInk screens (due to how the current Kaleido technology works). The Gallery technology does not suffer from this loss of contrast, but the trade-off is that its refresh times are slower than even 1st-generation eInk panels.

Garmin still uses reflective LCDs, even on the Fenix 8. The AMOLED is a separate SKU.

eInk is superior to transflective LCDs in terms of power use, as it only needs to be refreshed when content changes; an LCD must be refreshed multiple times per second. Only bistable LCDs can display an image without power, but this comes at the cost of resolution and contrast.


> Eink B&W screen contrast has been improving dramatically with every generation,

No: https://blog.the-ebook-reader.com/2021/01/20/contrast-on-e-i...

Ever since Carta it has been stuck at 15:1 and it is trivial to see that e.g. Remarkable has better contrast than the newer (B&W) Kobos.

As I said, this has _nothing_ to do with the color screens, where the contrast is reduced even further, _even_ in Gallery (per eInk's own spec sheet, as well as by plain observation on a newer reMarkable color device).

> Garmin still uses reflective LCDs, even on the Fenix 8. The AMOLED is a separate SKU.

No. The _reflective LCD_ one is the one which has become the separate SKU (it is now called the 'Solar'; the main series is now all AMOLED), and guess which SKU is neither stocked nor displayed in stores. It used to be that "Epix" was the AMOLED version of the Fenix, but now it has replaced the mainstream Fenix. As a fan of the reflective LCD Garmin watches (since the 1st-generation Fenix), the writing is on the wall.

> Eink is superior to transflective LCDs in terms of power use as it only needs to be refreshed when content changes; an LCD must be refreshed multiple times per second.

However, eInk requires _significantly more_ power when refreshing than an LCD does, not to mention a more complex controller, while the power required for a refresh on a memory LCD is practically negligible. So, as I said, unless your use case involves the eInk panel staying static for _days at a time_, the LCD will win.

And no customer really wants a screen that is only refreshed once a week; it defeats the point of a screen. I could even say the same of a "dynamic" billboard. There's a reason even price stickers at shops use LCDs.

Is there nowadays at least one eInk watch that can surpass the battery life of the reflective LCD Garmin watches (measured in months, even with at least one screen refresh per minute)? Note that many "eInk" smartwatches (e.g. the Pebble) actually use a memory LCD, not an eInk panel, behind the scenes -- which furthers my "users cannot even distinguish eInk from reflective LCD" argument.


> Ever since Carta it has been stuck at 15:1 and it is trivial to see that e.g. Remarkable has better contrast than the newer (B&W) Kobos.

This is false. Carta is the B&W family of eInk panels. The most recent one (the Carta 1300) has significantly improved contrast over the 2021-era panel, the Carta 1000. It's trivial to see that, and nobody looking at the most recent B&W Kobo would claim that it has less contrast than a 2021-era device.

The Remarkable 1 uses a custom co-developed version of the Canvas panel which reduces the thickness of the touchscreen layer and the other layers above the eInk panel; those layers are the primary cause of reduced contrast in eInk devices (including the Remarkable 1). (The Remarkable 2 uses a custom co-developed version of Gallery, which has greater contrast and amazing color but slower refresh times than Carta or Kaleido.)

If you ever get your hands on the eInk hardware itself, you would be amazed at how much contrast even the 1st-gen panels have... and how much contrast you lose to all the layers that get added above the panels to make them durable and usable in handheld devices.

> The _reflective LCD_ one is the one which has become the separate SKU... and guess which SKU is neither stocked nor displayed in stores.

Both the AMOLED and the Solar watch are separate SKUs with the display type in the name. There is no "base" Fenix 8 anymore. And on that note, the five closest Best Buys and REIs to me all stock both SKUs for immediate pickup.

> So, as I said, unless your use case involves the eInk panel staying static for _days at a time_, plain old LCD will win by far.

This is also false. There have been a number of transflective e-reader devices on the market. They get worse battery life and have significantly worse contrast (without backlighting) than their eInk counterparts. Seriously dude, if transflective LCDs got better battery life and had contrast competitive with eInk panels, do you really think that every e-reader company, including Amazon, would still be using eInk panels over cheaper transflective LCD panels?


> The most recent one (the Carta 1300) has significantly improved contrast over the 2021 era panel, the Carta 1000. It's trivial to see that, and nobody looking at the most recent Kobo B&W would claim that it has less contrast than a 2021-era device.

Well, I have linked an article making exactly that claim. But how much has the Carta 1300 improved the contrast, exactly? eInk has stopped publishing the contrast ratio in the public specs, leaving just the marketing BS that says the contrast ratio is improved (over what?), so I'm fearing the worst. I bet you it's still 15:1 (as Carta was in 2013) on paper, or rounding-error close to that, which explains why most users would see contrast as becoming worse.

> The Remarkable 1 uses a custom co-developed version of the Canvas panel [...] has reduced the thickness of the touchscreen layers and other layers above the eink panel

This is marketing BS. There is no such thing as a Canvas panel. It's Carta.

Also, the RM1 has no other layers. Stylus input is Wacom (below the substrate) and there is no frontlight. On the RM color pro they made stylus input capacitive AND added a frontlight, which may arguably have increased the touchscreen layer thickness, leading to the perceived reduction in contrast. But ironically enough, even eInk's own spec says Gallery has lower contrast than Carta (around 12:1 for Gallery 3), so no comparison is needed there. Unsurprisingly, all reviews say contrast has taken a hit.

> you would be amazed at how much contrast even the 1st gen panels have

The early panels were utter crap. There's a reason you could not even put glass on top of them, and things like "infrared touchscreens" were a thing on ancient e-readers (google them, if you're curious). The improvements since those ancient panels have been significant -- they used to have contrast ratios worse than 8:1, and Pearl and Carta raised that to 15:1. However, it is still ridiculous compared to the contrast of most other screen technologies (even memory LCD can reach 20:1: https://www1.futureelectronics.com/doc/SHARP/LS013B7DH03.pdf). And has it improved at all in the last decade?

Not blaming eInk: there is a physical limit to contrast for their tech.

> Both the AMOLED and the Solar Watch are separate SKUs with the display in the name. There is no "base" Fenix 8 anymore

If you google it, or if you click on the product, or if you choose the cheapest one, or if you walk into a physical store... you will be offered the AMOLED one. It used to be that you had to go out of your way to get the AMOLED line. Now it's all in your face. I do not have product sales numbers, but it's still rather obvious to me that they're focusing on the AMOLED one.

> Seriously dude, if tranflective LCDs got better battery life and had competitive contrast to eink panels, do you really think that every ereader company including Amazon would still be using eink panels over cheaper transflective LCD panels?

Memory LCD panels are _not_ cheaper, and most definitely not at this size. I'm not even sure they are manufactured at such sizes, either.

E-readers are the only thing that defies the overall trend, maybe because eInk practically defines the product line; but they are becoming even more of a niche market -- most people seem to have no problem doing their reading on a backlit LCD iPad.


Yeah, I mistyped regarding the Remarkable 1 display. I meant to say it's just a custom co-developed Carta panel that they were calling Canvas because it had significant proprietary changes from Remarkable.

> even memory LCD can reach 20:1

For a screen with a 1.25" diagonal. Not competitive unless your e-reader is dedicated to haikus. Carta 1000 was 15:1, Carta 1200 claimed a 20% improvement, and Carta 1300 claimed another 15% improvement, which puts Carta 1300 at roughly a 20:1 ratio, which is about right based on real-world reviews of the most recent Kobos. And this is for devices with 7-to-13-inch screens, not 1.25-inch screens. Kaleido adds a color layer on top, which reduces contrast in Kaleido devices. Gallery has higher contrast when using color (but you would be correct that, when sticking to B&W only, Gallery has lower contrast).
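
(A quick back-of-the-envelope check of that figure, assuming the claimed marketing percentages simply compound on the 15:1 Carta 1000 baseline -- a sketch of the arithmetic, not eInk's own numbers:)

    # Rough sanity check of the claimed generational gains, assuming the
    # "+20%" and "+15%" figures compound on the 15:1 Carta 1000 baseline.
    carta_1000 = 15.0
    carta_1200 = carta_1000 * 1.20   # claimed +20%
    carta_1300 = carta_1200 * 1.15   # claimed another +15%
    print(round(carta_1300, 1))      # 20.7, i.e. roughly 20:1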

> And has it improved at all in the last decade?

Yes, significantly. You have decided it has not and reject all evidence to the contrary.

> If you google it, or if you click on the product, or if you choose the cheapest one, or if you walk into a physical store... you will be offered the AMOLED one.

Definitely false. REI will try to sell you the Solar one (for obvious reasons). Best Buy will sell you whichever one you want, but will try to steer people toward cheaper watches like the Forerunner or Instinct that people are more likely to actually buy.

> Memory LCD panels are _not_ cheaper, and most definitely not at this size. I'm not even sure they are manufactured at such sizes, either.

Alibaba says otherwise, and that's just a 5-second search. It appears that I can order ten 10-inch transflective displays for $200, which is about what it costs to acquire a single 10-inch Kaleido 3 screen. In other words, transflective screens are about 1/10th the cost of a comparably sized eInk panel. Which brings us back to this: if transflective LCDs were actually superior to eInk panels for the e-reader use case, why is every e-reader company sticking to eInk? Why is notoriously cost-conscious Amazon sticking to eInk, when transflective LCDs would be far cheaper to make at scale? (Hint: it's because eInk is better for the e-reader use case.)


> For a screen 1.25" diagonal. Not competitive unless your ereader is dedicated to haikus.

> Alibaba says otherwise, and that's just a 5-second search. It appears that I can order ten 10-inch transflective displays for $200, which is about what it costs to acquire a single 10-inch Kaleido 3 screen

Do not confuse memory LCDs with generic reflective LCDs. Memory LCDs are the ones I mentioned as having lower power usage during refresh, as having a higher price than eInk, and as not even being available in larger sizes, AFAIK.

> Yes, significantly. You have decided it does not and reject all evidence to the contrary.

What evidence? The only thing I have explicitly discarded is the PR's "XX% improvement" messaging, because it is imprecise and has been wrong in the past. For example, the Gallery 3 contrast ratio is around 11.7:1 (see Table 1 of https://confit.atlas.jp/guide/event-img/idw2022/EP1-02/publi... ), significantly worse than Carta. I cannot find a similar measurement for Carta 1300, so I am at a loss; and since the last published number is 15:1, and reviewers describe the new screens as being _worse_...

> Definitely false. REI will try to sell you the Solar one (for obvious reasons). Best Buy will sell you whichever one you want, but will try to steer people toward cheaper watches like the Forerunner or Instinct that people are more likely to actually buy.

Sigh... What point are you trying to make here? Do you not agree that Garmin is pushing the AMOLED ones over the reflective LCD ones? Do you realize the Forerunner and the Instinct series are also AMOLED or are getting replaced by AMOLED? Do you disagree that Garmin's trend is clearly towards AMOLED? In that case, you should definitely go and extinguish a couple of fires happening in the Garmin user communities...

> Which brings us back to this: If transflective LCDs were actually superior to eInk panels for the e-reader use case, why is every ereader company sticking to eink?

Because eInk is cheaper! I said it even in my original post: eInk is the only one that survives because it's the cheapest one. Plus, I believe, because e-readers are becoming a niche mostly tied to eInk anyway, and getting utterly displaced in the market by, e.g., phones and tablets.


One example in MS Word is the ribbon. It is a relatively recent invention, and when it was introduced, _at least_ they went to the effort of using telemetry to figure out which features were actually used often versus which ones were not, and designed the ribbons accordingly.

Nowadays every new "feature" introduced in MS Word is just randomly appended to the right end of the main ribbon. As it stands, you open a default install of MS Word and at least 1/3 of the ribbon is stuff that is being pushed at users, not necessarily stuff that users want to use.

At least I can keep customizing it to remove the new crap they add, but how long until this customization ability is removed for "UI consistency"?


To add insult to injury, it now looks like AMD "officially" supports ROCm on the 7800 XT and lower _but only on Windows_. Compare:

https://rocm.docs.amd.com/projects/install-on-linux/en/lates...

https://rocm.docs.amd.com/projects/install-on-windows/en/lat...


To what injury? Was this bad news to begin with?

ROCm is kind of an injury in general. Many people have gotten burned in the past by assuming that AMD would support some of their most popular and powerful new hardware, and then they just didn't.

It seems like they're getting better at it, somewhat.


Someone needs to help stop misinformation:

I have run two 6** level cards on both Windows and Linux just fine. These cards have the same LLVM target (gfx1030). They tap into ROCm for inference just fine.
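
For reference, a minimal way to check this yourself (assuming a ROCm build of PyTorch, which exposes AMD GPUs through the same torch.cuda API):

    import torch

    print(torch.version.hip)                  # HIP/ROCm version of the wheel (None on CUDA builds)
    if torch.cuda.is_available():             # True if the ROCm runtime sees a usable GPU
        print(torch.cuda.get_device_name(0))  # e.g. the RX 6800 shows up here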

So whatever you are saying is categorically a lie or a misunderstanding. To be frank, it is particularly terrible misinformation because you can get a 4070 16 GB equivalent from AMD for less than $400.

The 7** series has RDNA3, but that doesn't preclude prior cards from getting ROCm support. The 6800, with RDNA2, has way better specs. People need to do their own research.

Edit:

You can run two 6800s for less than $800 and have 32 GB of VRAM at 4070 specs.


Yeah, AMD. They're providing the misinformation by not updating those pages. I use an RX 6800 on Linux just fine with ROCm, but the compatibility page only lists it as compatible on Windows, not on Linux. AMD needs to make sure their docs are up to date.

Touché. I forgot to shit on AMD; they create their own problems.

And people keep wondering why Nvidia has no competition from AMD.

They are trying their best to lose the current generation to no one. It's pretty funny when your own engineers can't get access to the cards you've supposedly launched.

Who are you referring to by "losing"?

I expressed the same idea here not too long ago - that the value of any one individual paper is exactly 0.0 - and was downvoted for it, but I believe this is almost the second thing that you learn after you publish, and it is what seems to confuse the "masses" the most.

You (as a mortal human being) are not going to be able to extract any knowledge whatsoever from an academic article. They are of value _only_ to (a) the authors, and (b) people/entities who have the means to reproduce/validate/disprove the results.

The system fails when people who can't really verify the results go on to use them. Which happens frequently... (e.g. in the news)


And reduce strain on the screen, and bandwidth when taking screenshots/screencasts..


Most of the garbage aspect is because toolkits have refused to support per-monitor DPI on X11, with the argument that "Wayland is just around the corner", for decades now.

For example, Qt does per-monitor DPI just fine on X11; it's just that the way to specify/override the DPI values sucks (an environment variable).
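
(If I recall correctly, the per-screen knob being referred to is QT_SCREEN_SCALE_FACTORS, which has to be set before the process starts; the output names below are just examples, check yours with xrandr:)

    # Per-output scale factors for a Qt app on X11; cannot be changed once the app is running.
    QT_SCREEN_SCALE_FACTORS="DP-1=2;HDMI-1=1" ./some-qt-app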

This stupid decision is going to chase us until the end of time, since Xwayland will have no standardized way to tell its clients about per-display DPI.


It's not useful if you have to specify a scaling factor before the application has started, when the application can move between monitors.

This is feasible on Wayland; X draws one large widescreen display.


X could do several different screens; I did have this working once. However, moving an application to a different display was then impossible (an app could do it, but it was a lot of work, so nobody bothered). A few CAD programs supported two screens, but they were separate and the two didn't meet.

Most people want to drag windows between screens and sometimes even split one down the middle. One large display supports that much more easily, so that is what everyone switched to in the late 1990s.


I was using it that way until about 2020. (Mint 13 MATE supported it, but it seems that capability was lost somewhere along the line. A shame, because I have a dual-monitor setup where the second monitor is often displaying the picture from a different device, so in that situation I absolutely cannot have applications deciding to open on the busy-elsewhere screen. I miss being able to set a movie running on one monitor and have it not disappear if I flipped virtual desktops on the other!)


Yes, separate screens would be a much better model for me as well. Much better than KDE randomly deciding to show KRunner on the turned off TV for some reason unless I manually disable the output.


X11 does a lot of things that are outdated now, and multiple independent screens is one of them. Ideally, you'd be able to select either independent screens or one big virtual screen, and still have a window manager be able to move windows between independent screens. I don't know how that would be achieved though.

X's "mechanism, not policy" has proven to be a failure. Without a consistent policy you end up with a pile of things that don't work together. "One big virtual screen" is a policy and it's one that works well enough.

Deciding what needs to be in the protocol and what should be in the application is never easy. It's important to be able to iterate quickly and avoid locking in bad practices. I think that's what Wayland tried to do by making everything a fine-grained extension, but it doesn't really work like that as the extensions become mandatory in practice.


For the record: you can specify a different DPI _for each monitor_ with Qt on X11. You just cannot change it after the program has started, which is exactly the limitation I was referring to.

But you can definitely move windows to another monitor and Qt will use the right DPI for it. It is the same behavior as Wayland. "One large wide screen display" is exactly how Wayland works...


X11 used to provide separate displays, but at some point, due to hardware changes (and quite probably due to the prominence of Intel hardware, actually), it was changed to a merged framebuffer with virtual cut-out displays.

In a way, Wayland in this case developed a solution for an issue its creators brought into this world in the first place.


It can still provide separate displays. The problem is you can't do something like drag a window from display 1 to display 2°. IIRC it's also annoying to launch two instances of a program on both displays. The hacky merged-framebuffer thing is a workaround for these problems. But you can have independent DPIs on each display.

° For most programs.


Yeah, there were certainly tradeoffs. It's much harder to use separate displays now, though - last time I tried, I could address the two displays individually (":0.0" and ":0.1") if I launched X on its own, but something (maybe the display manager?) was merging them into a single virtual display (":0") as soon as I tried to use an actual desktop environment. (This was Mint 20, MATE edition, a few years ago - I gave up and reverted to a single-monitor setup at that point.)
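
(For anyone who hasn't seen this mode: in a classic multi-screen "Zaphod" setup each screen gets its own display string, so you launch programs per screen, e.g.:)

    # Start an application on the second screen of display :0;
    # windows cannot be dragged from :0.0 to :0.1.
    DISPLAY=:0.1 xterm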


> It's not useful if you have to specify a scaling factor before the application has started, when the application can move monitors.

Windows does this. Try using two monitors with two different scaling factors in Windows. It is hit or miss: 100 and 150 works; 100 and 125 doesn't.


Yes. It just proves that all you needed was a better way to specify the per-monitor DPI: one that can be updated afterwards, or even set by the WM on individual windows.



I would really like to see a concrete, legit way to materialize a "100M raise in market cap" into actual ROI ...


When the market cap rises, the price of shares goes up? Do you know what a market cap is?


Yes, but the company doesn't get more money from that. The only way to get money out of it is by selling shares at the new price.

However it would also raise future revenue, which should be what's reflected by the market.

So it would still be something that's good for the company, but not nearly 100B good.


You don't think AMD being competitive with Nvidia ($3.37 trillion USD market cap) would be "nearly 100B good"? Believe it or not, the only thing keeping that from happening is good, bug-free software. That's what tinygrad is doing.


Again, market cap is not what the company has. It's what the market believes the company should be worth, accounting for future earnings.


It always rubs me the wrong way that YouTube puts a "this is a state actor" disclaimer on a video uploaded by the well-known public media corporation of a western democracy, but puts zero disclaimer whatsoever on a random video uploaded by an anonymous account created 2 minutes ago.


There is another reason I dislike this, which is that Apple now has a reason for "encrypted" data to be sent randomly, or at least every time you take a picture. If in the future they silently change the Photos app (a real risk that I have repeatedly emphasized in the past), they can now silently pass along a hash of the photo and no one would be the wiser.

If an iPhone was not sending any traffic whatsoever to the mothership, at least it would ring alarm bells if it suddenly started doing so.


Isn't this the same argument as saying they can change any part of the underlying OS and compromise security by exfiltrating secret data? Why is it specific to this Photos feature?


No. GP means that if the app was not already phoning home, then seeing it phone home would ring alarm bells; but if the app always phones home whenever you use it at all, then you can't treat "phoning home" as an alarm -- you either accept it or abandon it.

Whereas if the app never phoned home and then, upon upgrade, it started to, you could decide to kill it and stop using the app / phone.

Of course, realistically <.00001% of users would even check for unexpected phone home, or abandon the platform over any of this. So in a way you're right.


The post also said that, now that phoning home isn't an alarm, Apple could subvert the Photos app by passing along a hash of the photo (presumably sensitive data). My contention is that Apple could do that with virtually any app that talks to the mothership; it is not unique to Photos.


Which is why I point out the dangers of accepting this behavior as normal. I'm assuming you mean they could siphon off the hashes of my photos through any other channel (e.g. even when calling the mothership to check for updates), but this is not entirely true. For example, were I to take a million photos, such traffic would suspiciously increase in proportion.

If you accept that every photo captured will send traffic to the mothership, like the story here, then that is no longer something you can check, either.

In any case, as others have mentioned, no one cares. In fact, I could argue that the scenario I'm forecasting is exactly what has already happened: the Photos app suddenly started sending opaque blobs for every photo captured. A paranoid guy noticed this traffic and asked Apple about it. Apple replied with a flimsy justification, but users then went to ridiculous extremes to argue that this is not Apple spying on them, but a new super-secret magic sauce that cannot possibly be used to exfiltrate their data, despite the fact that Apple has provided exactly zero verifiable assurances about it (and in fact has no way to do so). And the paranoid guy will no longer be able to notice extra per-photo traffic in the future.


I don't understand these conspiracies: why would Apple put so much thought & effort into implementing security & privacy measures, going as far as participating in the CFRG, submitting RFCs, publishing papers, technical articles, etc., only to maliciously subvert it? If and when they do, they WILL get caught out, and they will lose something valuable that they hold: goodwill. This is a good case to apply Occam's razor.


They _do_ get caught (e.g. this, CSAM, etc.). People have ridiculously short memory spans. And in the meantime Apple gets to benefit from "privacy first" advertisements, even though the actual privacy improvements are unclear, if they exist at all.

One example of this effect is how, during the CSAM scandal, some people were under the wrong impression that Apple was the first to do on-device image classification. Actually, they were closer to the last to do it. Even Samsung (not well known for their privacy) was doing it locally. But this didn't prevent Apple from running full-page advertisements claiming so.

Or Apple selling Secure Boot, remote attestation, etc. as technologies for "user" privacy, when 20 years ago Microsoft of all companies tried the same thing (remember Palladium) and was correctly and universally panned for it. What makes Apple so different? They're even more likely than MS to subvert these technologies in a "tie-users-to-my-hardware" way.

Whenever Apple has the opportunity to take simple, risk-free, actual privacy solutions (such as, well, allowing you to _skip their servers altogether_), they often take the complicated, trivially bypassable approach, and claim it is for user friendliness. This is intentional: a complicated approach allows you to claim "sorry, implementation error!" whenever there is an issue, and avoid the appearance of maliciousness.


I'm aware of the CSAM episode; its execution wasn't surreptitious and it drew a huge backlash. The conspiracies I'm talking about involve maliciously subverting, in secret, a protocol they've engineered to be secure. These are conspiracy theories and have never eventuated. That's all I'll really say on the matter.


And they do silently change the applications. Maps has been updated for me via A/B testing. Messaging too.


Any app can do this, really; it just can't update the entitlements and a few other things. I would think it unlawful for Apple's own apps to have access to functionality/APIs that others don't…


Samsung at least does this "dog" cataloguing & searching entirely on-device, as trivially checked by disabling all network connectivity and taking a picture. It may ping home for several other reasons, though.


Apple also does the vast majority of photo categorization on device, and has for years over multiple major releases. Foods, drinks, many types of animals including specific breeds, OCRing all text on the image even when massively distorted, etc.

This feature is some new "landmark" detection, and it feels like a trial balloon or something, as it simply makes zero sense unless what they are categorizing as landmarks is enormous. The example is always the Eiffel Tower, but the data to identify most of the world's major landmarks is small relative to what the device can already detect, not to mention that such lookups don't even need photo identification and could instead (and actually already do, and long have) use simple location data and nearby POIs for such metadata tagging.

The landmarks thing is the beginning, but I feel like they want it to be much more detailed. Like every piece of art, model of car, etc, including as they change with new releases, etc.


Does or doesn't. You can't really tell if and when it does any cataloguing; the best I've managed to observe is that you can increase the chances of it happening if you keep your phone plugged into a charger for extended periods of time.

That's the problem with all those implementations: no feedback of any kind. No list of recognized tags. No information about what has been or is to be processed. Nothing at all. Just magic that doesn't work.


With embeddings, there might not be tags to display. Instead of labeling the photo with a tag of “dog”, it might just check whether the embedding of each photo is within some vector distance of the embedding of your search text.
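
A minimal sketch of that kind of tag-less search (the embed_text function and the 0.25 threshold are hypothetical placeholders for whatever on-device model and tuning are actually used; no "dog" tag is ever stored, the match falls out of vector distance alone):

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search(query, photo_vecs, embed_text, threshold=0.25):
        # photo_vecs: {photo_id: precomputed embedding} for the whole library.
        # embed_text: hypothetical text encoder from the same embedding model.
        q = embed_text(query)  # e.g. "dog on a beach"
        return [pid for pid, v in photo_vecs.items() if cosine(q, v) >= threshold]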


Yes and no. Embeddings can be used in both directions - if you can find images closest to some entries in a search text, you can also identify tokens or phrases closest in space to any image or cluster of images, and output that. It's a problem long solved in many different ways, including but not limited to e.g.:

https://github.com/pythongosssss/ComfyUI-WD14-Tagger

which uses specific models to generate proper booru tags out of any image you pass to it.
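
That reverse direction is essentially the same arithmetic run the other way: embed a fixed tag vocabulary once, then report whichever tags land closest to a photo's embedding. A rough sketch (the tag list and encoders are hypothetical placeholders, not any vendor's actual pipeline):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest_tags(photo_vec, tag_vecs, top_k=3):
        # tag_vecs: {"sunset": vec, "outdoors": vec, ...}, each embedded once
        # with the same model's text encoder and cached.
        ranked = sorted(tag_vecs.items(), key=lambda kv: -cosine(photo_vec, kv[1]))
        return [tag for tag, _ in ranked[:top_k]]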

More importantly, I know for sure they have this capability in practice, because if you tap the right way in the right app, when the Moon is in just the right phase, both Samsung Gallery and OneDrive Photos do (or, in the case of OneDrive, used to):

- Provide occasional completions and suggestions for predefined categories, like "sunset" or "outerwear" or "people", etc.;

- Auto-tag photos with some subset of those (OneDrive, which also sometimes records it in metadata), or, if you use the "edit tag" options, suggest the best-fitting tags (Samsung);

- Have a semi-random list of "Things" to choose from to categorize your photos, such as "Sunsets", "City", "Outdoors", "Room", etc. Google Photos does that one too.

This shows they do maintain a list of correct and recommended classifications. They just choose to keep it hidden.

With regard to face recognition, it's even worse. There are zero controls and zero information other than an occasionally matched (and often mismatched) face under photo properties, which you can sometimes delete.

