Disclaimer: I'm the developer of FlickType and have been advocating for this change for months now, after being severely affected by scam competitors on the App Store. I have also filed a lawsuit against Apple, partly because of this.
This is a welcome change, but what’s most important is what Apple actually does with the reports - which I’m definitely going to be keeping a close eye on.
Of note, this “New” Report-a-Problem button is not actually new: it used to be there but Apple decided to remove it years ago, while App Store scams were - and still are - running rampant: https://techcrunch.com/2021/10/06/with-return-of-report-a-pr...
I am interested in what they will do around false reports. I hear about negative review attacks from competitors on Amazon listings, so it will be interesting to see how this is weaponized and how Apple responds.
It works as sabotage on Amazon listings because Amazon uses reviews with certain keywords as lead indicators for product contamination or other issues. So if you want to sabotage a competitor's vitamin pill the day before Prime Day, you have an entire building full of people leave variations of "taking this made me feel nauseous," "this gave me a headache," or "weird chemical odor" after purchasing the product. Either the automated systems will flag it or complaints will flag it for review.
Software can't actually poison people in a physical sense, so generally the software marketplaces care less about signals from reviews like those.
Why are those reviews included at all if they weren't "verified purchases"? I guess there's some utility in it - you already own product X and want to let people know about something (positive or negative) but... this blatant sabotage from obvious non-customers/users is crazy.
You would pay your manipulators to leave verified purchases. In the black market for manipulative reviews that comes at a premium. You can also get similar effects with unverified reviews on some product types (consumables, beauty products, etc.)
It's not obvious if you bribe someone who uses an American shipping address to do it.
They're hoping it triggers a hazmat or pesticide flag. False pesticide flags are extremely common, even for things where they make no sense whatsoever. Anything that says 'filter', for example, will often result in a false pesticide flag.
I haven't worked on anything related to app stores in years so I'm not sure, but that's promising, as would be reports like "this posted without my permission," "this bricked my phone," "my phone ran so hot it burned my pocket," etc.
> Now App Store product pages on iOS 15, iPadOS 15, and macOS Monterey display a “Report a Problem“ link, so users can more easily report concerns with content they’ve purchased or downloaded.
It appears that reports are going to be attributed to actual accounts, which should enable Apple to identify abusers more easily.
> It appears that reports are going to be attributed to actual accounts, which should enable Apple to identify abusers more easily.
I don't think the one implies the other in this case. Account creation is not a high enough bar to block abusers, IMHO. Amazon reviews require accounts too, but that obviously doesn't stop the negative review attacks which GP mentioned.
Account creation is a very high bar when it comes to Apple services, because Apple "console bans" hardware devices associated with accounts that evidence fraudulent behavior. I expect they will only seriously consider reports from users who have an active Apple hardware device bound to their Apple account.
I also expect they'll link this system to a human review queue. If you send abusive reports for something that isn't scammy, it will probably not be considered scammy during human review, especially if you're responsive to questions asked by Apple during that review. There will, of course, be a 'first post' false positive someday when a reviewer makes the wrong call, but it's a safe bet that they have not implemented a Google/Facebook/Yelp system where your life and livelihood can be destroyed by an algorithm without human participation.
I’m sure that, like most models, the weight of your submission will largely be determined by multiple factors: the age of the account, how much money you’ve spent, whether you’ve spent money on the app being reported, how many devices are connected to the account and for how long, etc. Not all complaints are created equal, and this lets them assign a “trust factor” across several metrics.
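Purely as an illustration of that idea, here is a minimal sketch of such a weighting, in Swift. Every factor, field name, cap, and weight below is my own assumption - Apple hasn't documented how (or whether) it scores reporters.

    import Foundation

    // Hypothetical reporter profile; none of these fields or weights are
    // documented by Apple - they just mirror the factors mentioned above.
    struct ReporterProfile {
        let accountAgeDays: Int
        let lifetimeSpendUSD: Double
        let spentOnReportedApp: Bool
        let activeDeviceCount: Int
    }

    func trustWeight(for r: ReporterProfile) -> Double {
        var score = 0.0
        score += min(Double(r.accountAgeDays) / 365.0, 5.0)  // account age, capped
        score += min(r.lifetimeSpendUSD / 100.0, 5.0)        // lifetime spend, capped
        score += r.spentOnReportedApp ? 3.0 : 0.0            // skin in the game
        score += Double(min(r.activeDeviceCount, 3))         // long-lived hardware ties
        return score
    }

    // A report's contribution toward any review threshold could then be
    // scaled by this weight instead of counting every report equally.
    let throwaway = ReporterProfile(accountAgeDays: 14, lifetimeSpendUSD: 0,
                                    spentOnReportedApp: false, activeDeviceCount: 1)
    let established = ReporterProfile(accountAgeDays: 2000, lifetimeSpendUSD: 600,
                                      spentOnReportedApp: true, activeDeviceCount: 2)
    print(trustWeight(for: throwaway), trustWeight(for: established))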
For a few thousand bucks and the right cause, I'll happily sell my vetted AppleID for this. I don't have anything important in there -- a ton of apps that I never use that I can't remove from my account, and years of history. It is of zero value to me, but apparently because of these checks, could be worth a lot of money to someone else.
I feel like App Store accounts require a credit card number, and they do an authorization for a few cents to check if it’s real. At least they did years ago when I tried to set up a relative’s iOS device with a fake credit card number.
I would assume that attacks like these could be checked against device ownership. For example, if a user has been the first registered owner of an Apple device, their reports may be deemed more trustworthy. If Apple sees a device being reused to submit similar reports, that could be a signal that the reports are fake.
Quite easy/cheap to pass that bar - have your 100-1000 sockpuppets spend a few $$ on a subscription (which may or may not be money coming back to you in some fashion), then buy competitor apps and report them as scams.
Possibly negated with some form of (even Apple-internal) web-of-trust measurement of account validity.
If you have an Apple developer account in good standing, there's a separate reporting page introduced at WWDC for you to use, which doesn't require you to have purchased the app first. There's also a pre-existing, years-old process for you to report lookalikes and such.
Weird. I thought all the apps were already reviewed and approved by a specialized and trained committee before being blessed with a space in the App Store. What could we possibly need this button for?
Being an App Store developer is like intentionally walking into an abusive relationship and hoping that the pay-off will be worth the trouble.
I'm sure you're aware, but as an app developer, the review process often seems random.
Depending upon the phase of the moon, certain features or metadata may be accepted or rejected, even when the exact same features have been accepted in prior app versions and reviews (without any changes to the actual guidelines).
In such cases, perhaps Apple's interpretation of the guidelines has changed. If so, there's no way for developers to know this until they get hit with a rejection. Rejections rarely include an explanation of the interpretation, so it's up to the developer to figure out what the real issue is.
This is on top of the fact that, in many cases, Apple deliberately uses the guidelines and shifting interpretations to delay or destroy competitors. This occurs even if competitors are acting within the guidelines and all prior known interpretations of those guidelines.
Which is not to say that "Report a Scam or Fraud" is a bad policy.
App Store review can be random, AND this can be a good policy. (Well, good for consumers. I could see it causing developers some issues if competitors weaponize it.)
I, for one, like the F-Droid model, or really the model of any sane package manager.
When I download a package from the F-Droid repository, I know the folks at F-Droid have the source code of the bits I downloaded. Same thing with Debian and Fedora. Why can't Apple do the same thing? Require developers to submit machine-readable source code along with build instructions, build them directly on Apple's servers, and sign them with Apple's certificates. (A rough sketch of the verification step this would enable is below.)
I am not an Apple developer. As a consumer, I would welcome such a change.
Now, it is true that you could have a web view that fetches application chrome/content from a web server but I would argue Apple should disallow such apps anyway.
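Here is that rough sketch of the verification step, assuming the store rebuilds the submitted source itself and compares the result with what it distributes. The paths are placeholders, and real iOS artifacts would need signatures stripped (or deterministic signing) before a byte-for-byte comparison means anything.

    import Foundation
    import CryptoKit

    // Compare a distributed artifact against one rebuilt from the submitted
    // source. Paths are placeholders for illustration only.
    func sha256Hex(of url: URL) throws -> String {
        let data = try Data(contentsOf: url)
        return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
    }

    let distributed = URL(fileURLWithPath: "/tmp/app-as-distributed.ipa")
    let rebuilt = URL(fileURLWithPath: "/tmp/app-rebuilt-from-source.ipa")

    do {
        let reproducible = try sha256Hex(of: distributed) == sha256Hex(of: rebuilt)
        print(reproducible ? "Binary matches the submitted source"
                           : "Binary does NOT match the source build")
    } catch {
        print("Could not read artifacts: \(error)")
    }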
While that might be a great idea, Apple is a competitor of many of the companies in the App Store. I'm not sure how they'd feel about their largest competitor having access to their absolutely-valuable-and-secret source code.
If we had to pay to submit comments and wait a few days or even weeks for them to go through the company's "stringent" rules before being approved, then yes, a flagging button would be counterintuitive.
Not really. You think it would be perfect? There are plenty of review systems that take literally years and still fail. If you're based in the USA, you should be familiar with these.
I can only imagine the amount of trash reports they'll get.
"I clicked on this ad on Facebook and bought something that never arrived! Help, I've been scammed!"
"Why is this Amazon product late?? The Amazon app is a scam!"
"They messed up my chipotle order and the app doesn't allow me to report the order as missing something".
I know none of these are really Apple's fault/responsibility. But ask anyone who works in customer support - there is a really long tail of people who actually think all apps are by Apple and that Apple should offer support.
This is why fraudsters do not fear report options. The high incidence of false positives and the poor signal-to-noise ratio guarantee that a high threshold of reports will have to be met before any action is taken, giving the scammers enough time to profit.
In this case, though, a report can flag the app for human review, unlike on Amazon, where it’s harder, more expensive, and slower to check whether something is a scam.
The mobile carrier that I use has an app on the App Store that subscribers can install to view their subscription, change their plan, and view past and present bills/invoices. The app itself is pretty good: it's snappy and easy to use. The only shortcoming is a widget that comes with the app but doesn't work, which they have been aware of for a long time without fixing.
Anyways, I was looking at the reviews and most were positive. But then there was one review that stood out. A one-star review, and get this – it was complaining about something totally orthogonal to the app itself: the prices of the plans the carrier offers. The pricing of the carrier's plans is not relevant to managing your plan through this app.
These kinds of irrelevant complaints are sure to show up in reports too, like you say. But let's hope that the amount of those will stay relatively low.
If you read HN posts about Apple or Mozilla or Zoom over the past couple years, this exact behavior is wildly popular here too. Our commenters will hijack a discussion about software RAID or crypto to complain about their favorite pet peeves about app store policy or not liking compact mode or whatever.
The mods do their best to keep the discussions on topic, but I sure do wish more of us agreed with you, even when a given topic brings up intense feelings.
I used to monetize my Android apps with a well-known platform that seemed honest, until I started receiving reports from users who got scammed by some ad publishers using the platform. I even got one of my apps suspended due to porn ads! This is not the fault of devs, and it's really difficult to avoid.
Why did you quote my sentence without including the first part? It completely changes my statement.
Apple and Google do not automatically install or run 3rd party apps from the store. If they did, then yes, obviously it would be their fault, but they don't.
In other words, Apple has been ignoring blatant fraud like subscription fraud for years and something has now forced their hand.
I wonder what it was. Maybe someone realized that Apple should be considered complicit if they pocket 30% of the proceeds of the crime and continue to enable it?
“Be careful what you wish for, lest it come true!”
- Aesop’s Fables (circa 260 BC).
This will probably not affect the big incumbent apps at all with all their resources and legal teams.
This can and probably will be used as a blunt instrument and a weapon against the honest indie hacker. Another unnecessarily high bar put in front of them.
The scamsters will probably be unaffected in the long run, while the costs are borne by the small guys. The scamsters are pros at gaming the system and will always figure out workarounds.
I hope I am wrong, but the unintended consequences of penalties always fall selectively on the weak and honest.
We already see similar results play out everywhere else and we never seem to learn.
There's a particularly nasty type of scam, the "gazillion bucks a month subscription" app, that I have encountered a few times but have been unable to report.
It will only matter if management is responsive to reports (see, for example, the YouTube crypto livestream scam problem, which has been going on for almost 3 years).
I just wish this were available for apps you haven’t purchased yet. There are some obvious scams that I cannot report until I purchase them, which kinda defeats the purpose for this use case.
Why not just allow web installs without the app store?
You can report frauds against an app's signature, and centralized command and control can disable apps that attempt to harm users. (With opt-out if you care.)
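For what it's worth, macOS can already resolve an installed bundle to its code-signing identity via the Security framework, which is the kind of stable handle a signature-based report could be keyed on. This is only a sketch - the app path is a placeholder and none of this is an actual reporting API:

    import Foundation
    import Security

    // macOS-only sketch: validate a bundle's code signature and pull out its
    // signing identifier. Not a reporting API - just the kind of identifier a
    // signature-based report could reference.
    func signingIdentifier(forAppAt path: String) -> String? {
        var staticCode: SecStaticCode?
        let url = URL(fileURLWithPath: path) as CFURL
        guard SecStaticCodeCreateWithPath(url, [], &staticCode) == errSecSuccess,
              let code = staticCode,
              SecStaticCodeCheckValidity(code, [], nil) == errSecSuccess else {
            return nil  // unsigned or tampered bundle
        }
        var info: CFDictionary?
        guard SecCodeCopySigningInformation(code, [], &info) == errSecSuccess,
              let dict = info as? [String: Any] else {
            return nil
        }
        return dict[kSecCodeInfoIdentifier as String] as? String
    }

    print(signingIdentifier(forAppAt: "/Applications/Safari.app") ?? "no valid signature")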
> You can report frauds against an app's signature, and centralized command and control can disable apps that attempt to harm users. (With opt-out if you care.)
The only new thing here is that you can opt-out.
Fraud/Scam is a real issue, and controlling your device is a real issue. But they're two issues.