I hate it when euphemism slides into flat-out lying like this. They are not "unable" to reinstate the account; in fact they are the only party able to reinstate it, which is why the account holder was contacting them instead of someone else. They are "unwilling" to reinstate the account.
I know it's all just bullshit but it bothers me anyway.
If not for implications like this, almost every single use of "unable" (or "can't", for that matter) ever in a sentence would be "lying" unless something is against the laws of physics.
But I'm not "unable" to issue a refund.
In another case I may say "hm it's out of warranty but you know what, it really shouldn't have broken like that and you're a good customer, so I'll give a refund anyway." I can do that because I am able to issue a refund.
As for their policy, they are both the authors and interpreters of their own policy, so the "my hands are tied" argument is pure BS. If they are unable to reinstate accounts, why do they have an appeals process at all?
"I cannot continue this relationship"
"I can't kill this guy"
"I just can't eat meat anymore"
"I cannot continue like this"
These are all examples where someone clearly could for physical reasons, but they can't for other reasons they are bound to, whatever these reasons are.
However the key here is exploiting the ambiguity.
‘We are unable to’ is a cowardly way of saying ‘we choose not to’, or ‘our policy dictates’.
"We're unable to" shifts responsibility to something vague, unspecific. It's like the "run around" only with this phrase you've been redirected to /dev/null. I'm glad the OP said something.
> In your case, we have detected invalid traffic or activity on your account
The rejection email isn't isolated, so when it says "unable" it's conveying the implied message "unable to reinstate your publisher account <without bending the rules because we think our ad fraud detection systems were correct>".
The only takeaway is that there is no obviously correct wording for google to make when wielding the banhammer.
Well, that is, strictly speaking, a false statement. I agree the discussion has been civil; but consider that supporting Google's position requires adding phrases to their statement which fundamentally change the statement's meaning.
I'm not sure what it's called; something like the opposite of a strawman (a steelman, perhaps). It is indeed simple to defend a statement when you give yourself permission to rewrite it in a post hoc fashion. It's an argument I find uncompelling, no matter how civilly it's presented.
This is true only because ‘correct’ is both a relative term and a black and white one, which doesn’t apply to the situation we are discussing. You have to decide what you value, and perfection is not an option, so there is indeed no ‘correct’ wording, but that doesn’t mean there aren’t better wordings, nor does it make the one they chose ok.
It seems to me that there are many ways Google could be more honest and less opaque.
It seems like quite a few people here (myself included) value those traits and would like to see Google adopt them.
That won’t immunize them from criticism or make everyone agree with their judgements, but it would still be better.
Can you explain what value it adds in this specific case?
Whether they use "unable" or "choose not to" shouldn't matter.
Just treat it the same.
I don't think there would be many complaints if Google didn't take down apps for vague reasons.
No lying, no ambiguity. They can't reinstate this account.
Should they change their policy so that after that change, they can? Maybe, but good luck getting them to.
A policy is just their way of doing things, written down.
It’s not magic.
> someone -including you- could then actually sue us for unfair business practices
To reiterate, "policy" is just explicitly specifying "that's just the way we do things around here"
There are plenty of policies that state that they may be changed by one side or that exceptions may be made in cases where the wording doesn’t cover the intent.
The Apple store policy for example says this.
In this particular case, the ambiguity is exactly that - Google didn't say what real policy was broken or how.
Given the information in the article, you can't verify any of the traffic or actions that were supposed to break policy. After all Google wrote, "We understand that you may want to know more about the issues that we’ve detected. Because this information could be used to circumvent our proprietary detection system, we’re unable to provide our publishers with information about specific account activity."
"We understand that you may want to know more about the crimes that we’ve detected. Because this information could be used to circumvent our proprietary policing system, we’re unable to provide defendants with information about alleged criminal activity."
You’d think there would be examples of people doing this, but so far I haven’t heard of them.
It’s standard corporate doublespeak. Just because a lot of corporations do something by habit, doesn’t make it disingenuous to critique.
Google in particular tried to be a different and less evil kind of company.
We know that ship has sailed, but it doesn’t mean the values that it spoke to are any less important.
I'd personally call it an 'abuse' or 'misuse' or... 'misleading use' of language, but you do you.
That we are aware they are dishonest does not make them less so.
Sometimes that's true. Most times they hope you will believe it's true. Many people have been trained to believe it, true or not.
A business concerned you may take your custom elsewhere will not insult you this way. It is the mark of a monopolist or a crook.
I will make a point of remembering your relationship with truth.
It could just be that they know every other option will treat you in the same shitty way. I suppose you could stick that under 'crook'.
The language is intentionally deflecting responsibility by obscuring the source of the restriction. They're trying to reduce argument by giving the impression responsibility exists in some other unidentified channel.
Without reasoning we cannot tell if the auxiliary verb is even correct.
“I can’t eat meat anymore because it’s illegal”, really should read “I shouldn’t eat meat anymore” as although it’s a bad idea you’re still physically capable of eating meat.
I think the issue we’re talking about is ambiguity, and this really just emphasises the point.
It could be as simple as "Google management reviewed this app and decided it cuts into the bottom line of some service offered by Google." If the low-level person writing emails is aware of this fact then it would be reasonable to understand why they are unable to share the true motivation for suspension. What is more likely is the low-level email writer looks up the account and the reason for suspension listed is, verbatim, "Ad fraud - <3 Mgmt." Then the low-level person would not be stretching the truth at all when they say they are unable to reinstate the account and are unable to provide more information.
I agree that the person writing isn't lying, but the language is deflecting responsibility away from management choices. The clearest wording from the perspective of the writer would be "I am unable because we are unwilling...".
I agree with those who feel it's important to keep a distinction between externally imposed limitations and internally imposed limitations.
I mean it is like "I'm unable to return the money [as it would be against my policy to do so]".
Though it is even worse than that, as Google wouldn't even say which policy was violated. Full Kafka. The fact that people still put up with this is clear evidence of Google's monopoly position.
Just because this usage of "unable" doesn't match your strict personal definition doesn't mean they are wrong. People generally agree and understand what "unable" means in this context, so you're kinda SOL.
Similar arguments apply to the Google store: the person, or more likely the software system, that composed that email might be unable to reinstate the account, but Google most definitely is able. Google is merely unwilling to do so.
Notice that the policy is clearly stated in the rejection and there is no ambiguity.
They could write "We generally do not issue refunds for items outside of warranty" and they're back to the statement being just one level more vague, and thus more true.
But in reality, both of those mean the same thing. Writing "We don't issue refunds outside of warranty periods" has an understood "excluding exceptional circumstances". Everyone knows it's there. Only people who are pedantic to the point of uselessness will argue about this, and you'll find out that the courts generally have little sympathy for that.
All human languages so far are inexact. Math is probably the most exact language we've invented for communicating ideas, but languages that the general public knows are all inexact.
If the correct thing is communicated unambiguously, that's already a success, even if a pedantic person can say "I know you mean that you don't 'generally' do it, so the absolute there is a lie", the fact that the pedant can point it out means they absolutely understood what was being conveyed correctly.
When you write in and ask them, "please steal a million dollars and give it to me", they might be able to figure out a way to steal it and give it to you, but for policy and job performance reasons they are unable to. They say: "I'm unable to do that for you". Who cares if they somehow could? We all understand they have chosen not to.
We are unable to reinstate your account = person responding does not have policy authority to reinstate your account and the exceptional circumstance was not identified.
"As your product is out of warranty we will not be issuing a refund."
Sounds rude, right? Because it draws attention to the fact that the decision is, at some level, completely arbitrary. But if you have your left hand write the policy and your right hand enforce it then you can say:
"I'm sorry but I'm unable to issue a refund because your product is out of warranty."
Makes it sound like that's just how the world works, doesn't it? And you come away feeling like "aww man they can't" instead of "they won't, money grubbing assholes." Customer service is, at its core, about managing emotions and often delivering bad news in a way that preserves the company's image.
If you pressed me I would admit that yes, in some exceptional cases we issue refunds for products outside of warranty, but we're not doing so in this case because [whatever, the product broke due to misuse, etc.].
To say I am not issuing a refund or that I do not issue refunds on out-of-warranty is truthful or reasonably so. It's perfectly possible to communicate that without being rude or claiming to be "unable."
I'm amazed at the people defending this type of verbiage... like at all.
The thing is that they can't bend the policy for certain players without being sued for unfair business practices/anti-competitive behavior, which is why Google has to enforce it on everyone if they want to enforce it on anyone.
You're right that this isn't solely a faceless corporate thing. People say "I can't" when they mean "I won't," for the same reasons Google did. We even ask "can you watch my kids?" Again, the same reasons drive the language. It lets a false but face-saving implication stand: you will watch my kids if you can, and if you won't then I'll assume you couldn't.
We also "ask" our employees or waitresses to do things, even though it's technically an order.
All this is good and fine. Language is supposed to embed cultural niceties that speak to our values and smooth relations between people.
The Orwellian shit comes in when these cross from figures of speech into euphemism, and the Orwellian point is that these things run deep. A bank manager is literally unaware of where her own prerogatives, organisational norms, hard corporate policies and regulatory rules begin and end. They are constantly implying (and thinking) that whatever is annoying/abusing their customers is not because of them. Usually it is.
"Can you X" literally/etymologically means "are you able." If you were to translate, you'd translate to "will you." I don't think the etymology is coincidence, or lost on an average speaker. "Can you" feels softer and more polite. It reflects something about how people want to interact with each other. You have to leave work early because your mom "couldn't" watch your kids, not because she refused to.
Incidentally, in Ireland we do say "will you," "would you" and "are you able to" more commonly than "can you," which sounds slightly American/international to my ear. Oddly (or not), "will you" is (IMO, locally) more informal. You'd say it to friends, when making trivial (pass the salt) requests. "Would you not" is also (I think) an Irish choice of words. It's used to make suggestions, rather than requests.
These things don't bother us until/unless they're coopted into a different context, and used tactically. Going back to the original point about corporate-drone speak.... The "can't" vs "won't" language is used in the first person to obscure responsibility. I didn't call it Orwellian because it's evil or onerous. I called it Orwellian because it affects culture/thought deeply. The language helps maintain an impenetrable ambiguity, implying that every contentious decision is actually not a decision. It's dictated by regulators, or at least by "corporate."
Also, in context... it's an (passive?) aggressive way to cut off a conversation. The equivalent of "Good day Sir!"
“Unable” is dishonest because it passes responsibility beyond the veil of the typical user’s ignorance. We’re so used to this sort of language that we’re conditioned to allow it even when we know it’s bullshit. It shuts down discussion and allows its wielder (inevitably a corporation) to avoid explaining itself. In the developed Western world we have a big problem with letting corporations do whatever the hell they want without explaining themselves, so I don’t think we should let them get away with this sort of thing anymore, and not being satisfied with mealy-mouthed evasion is one of the first steps down that road.
Hell, they can even change their policies if they want, so they aren't really "unable".
If you tried hard enough, you could probably manage this.
You can do it if you are stronger than the other person. You may not do it, according to the law.
Were the implied statement made explicit, then yes it'd be accurate.
Unable due to their policies, which they wrote and they can change (and which they often choose not to follow anyway).
I agree with OP - it’s not that Google isn’t able to do this, it’s that Google doesn’t want to.
The audience is laughing because this notion is ridiculous.
The rep in TFA uses "we," referring to Google. Google is able to reinstate accounts, and The Google Ad Traffic Quality Team is able to reinstate accounts depending on their judgement of whether someone is violating policy. If they are not able to reinstate accounts, can you explain to me why they're adjudicating account ban appeals? Do they say "no" to everyone?
The key point here is that the agent(s) are responsible for interpreting the policy. They have decided that Droidscript violates their policy, and I personally have no opinion about that. But to imply that it's "out of [our] hands" is dishonest.
Just say "upon review we've determined that your app violates our policies so we will not be reinstating your account."
That is not equivalent to what's happening here. There is no law preventing Google reinstating the account, and corporations don't have morals because they're not people. The only thing preventing them doing it is that the employees involved choose not to.
When someone points a gun at a cashier and says "this is a robbery and I'm gonna shoot you if you move a muscle," the cashier usually uses their ability to hold still out of concern for their safety.
The distinction matters.
We've gone from a world where we can run any software on our devices, to one where Apple and Google tell us how we can make money, what we can run, and what speech is permitted.
It's Orwellian, but with corporate greed instead of nation state fascism.
Though FWIW I’m unable to disagree.
I’ve sometimes spent hours crafting a single reply to politely decline a request. It’s not even proportional to the prospective importance of the customer, it’s a matter of respect for all potential users.
Also, sometimes, investigating the issue to determine the right words has revealed hitherto unknown problems, uncovered new possibilities, highlighted alternative solutions that may be palatable, or even changed the outcome entirely.
Discussing the language used is actually thereby more productive & constructive than simply piling on Google for being careless, callous and pompous yet again.
Non-plain English is usually a flag that the person you’re dealing with is not smart.
The usage of plain English words is something we do strive to see more often. We do hope that it can be utilised more often going forward. Unfortunately at this time we advise that the occurrence of non-plain English may indicate a violation of our MTC (minimal thought capacity) guidelines.
I don't think it's that they're not smart. It's that they have a separate agenda, often deflecting responsibility. People often use this sort of indirect wording even without consciously realizing it.
They share your phone/email with lots of dealers if you request a quote and don't read the fine print like I didn't...
You may see “we” as the company itself setting its own policy/process
> We understand that you may want to know more about the issues that we’ve detected. Because this information could be used to circumvent our proprietary detection system, we’re unable to provide our publishers with information about specific account activity.
> Once you’ve made changes to your site(s), app(s) or channel(s) to comply with our programme policies and terms of service, you can reach out to us using our appeal process. Please make sure that you provide a complete analysis of your traffic or other reasons that may have led to invalid activity in your appeal.
I realize that the term Kafka-esque is a bit overused nowadays... but this sounds exactly like a plot summary of Der Process.
"That's none of your business."
"How are we violating them?"
"I'm not going to tell you."
"What can we do?"
"Fix the issues, and then appeal."
"I've said too much already."
Obviously I can't say "of the last 2500 ad clicks zero of them had any mouse movement over the ad before the click event" because then the publisher obviously just fixes their fraud software.
This isn't specific to Google or even advertising. Every company has figured out when dealing with abuse and fraud sharing the minimum amount of information is beneficial to the health of the ecosystem as a whole.
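As a toy illustration of the kind of signal described above, here is a minimal sketch in Python. The field name and the threshold are entirely made up for illustration; real detection systems are obviously far more involved and deliberately secret.

```python
# Hypothetical sketch of the "clicks with no preceding mouse movement"
# signal mentioned above. Field names and the 90% threshold are invented.

def suspicious_click_ratio(clicks):
    """Fraction of clicks that had no mouse movement over the ad first."""
    if not clicks:
        return 0.0
    no_movement = sum(1 for c in clicks if not c.get("mouse_moved_over_ad"))
    return no_movement / len(clicks)

def looks_like_click_fraud(clicks, threshold=0.9):
    # Flag the publisher if the vast majority of clicks arrive with no
    # human-like pointer activity beforehand.
    return suspicious_click_ratio(clicks) >= threshold

# The scenario from the comment: 2500 clicks, zero with mouse movement.
clicks = [{"mouse_moved_over_ad": False}] * 2500
print(looks_like_click_fraud(clicks))  # prints True
```

The point stands either way: spelling out even a simple rule like this tells the fraudster exactly which field to fake.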
It could be better if the issue is fixed, but Google’s skin in the game is small enough it doesn’t matter, and they already chose to get rid of the publisher as an efficient solution.
This ban is not only not explaining how it detected unwanted activity, it is not explaining what activity it detected.
"We detected you faking ad impressions, though we won't tell you how we (believe we) know" is very different to "We detected you (or your app) doing something wrong, stop doing it and you will be fine. We won't tell you what you did wrong".
There is an opposite side to this. If you have a human appeal process your accuracy drops to min(ml_model, humans) and I have bad news about which one of these is smaller.
(What you said makes sense to me, I just don’t know if/how ad networks could differentiate between the account-owner committing fraud and a malicious 3rd party.)
Not saying this is what was happening here, or that y’all didn’t think of it; just a general observation I’ve noticed over the years.
It's not a situation like putting someone in prison where "beyond all reasonable doubt" is the appropriate mark; you can refuse to do business based on mere suspicion that may be mistaken. There's a limit where extra investigation or appeals is too costly compared to just accepting the lost revenue, and for small-scale customers, that limit is quite low. With fraud detection, you have to balance the tradeoff between false positives and false negatives, but you'll certainly have both.
We viewed them as a cost of doing business. Some small accounts got nuked. :shrug: If we had to have humans investigate everything, and produce reports / interpretations that nuked customers found satisfactory, we wouldn't have been willing to service accounts under probably $40k/year.
And keep in mind the ecosystem is filthy with fraud, particularly on the low end. There very much are groups of organized thieves actively exploiting adtech.
And as @PeterisP says... look, we're not a court. We're a private business that is refusing to do further business with someone. Our right to do this was very clearly explained before the beginning of any relationship, and agreed to by that someone. If that someone doesn't like it, their recourse is to not do business with us.
So you are basically justifying Google behavior. You are not a court, that's right. But every ban process should be easily and quickly prosecutable to settle the issue right in a court.
Obviously real fraudsters will never appeal like that, because they know they would incur even bigger problems.
I now know you have no experience at all fighting online fraud. People, um, lie.
On a serious note, if you require a prosecutable ban process -- whatever that means, because prosecuting is something the government does -- where you'll end is my original point. Ad companies will refuse to do business with publishers that aren't above some minimum threshold. My guess is $40k a year. Because remember eg google or whoever keeps about 1/3 of that money, so a $1k/mo minimum to staff humans and deal with arguing feels ballpark reasonable.
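The back-of-the-envelope guess above can be checked directly; the revenue share and staffing cost are the commenter's own rough assumptions, not real figures:

```python
# Rough check of the $40k/year threshold guess, using the commenter's
# assumptions: the ad network keeps about 1/3 of publisher revenue,
# and human review costs roughly $1k/month to staff.

publisher_revenue_per_year = 40_000
network_share = 1 / 3
review_cost_per_month = 1_000

network_cut_per_month = publisher_revenue_per_year * network_share / 12
print(round(network_cut_per_month))  # prints 1111, just above the $1k/mo cost
```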
Separately, I'm not justifying anything. I'm explaining the economics driving behavior. If you want to be mad at me for behavior I don't control or influence... :shrug:
There are literally no false positives. It may be fraud, it may be the ad is too close to a back button and gets accidentally clicked, it could be the ads don't display right. But at the end of the day, it is a revenue decision.
It's a really entertaining read.
And yes, it perfectly matches this situation - right in the very first sentence already.
> As a result of a German lawsuit, Project Gutenberg has blocked Germany from viewing the Gutenberg web site.
I listened to the audiobook of that (read by Geoffrey Howard), and while the reading itself is fine, you'll want to avoid the editors' preface because it gives out some plot points and the ending. (PSA: if you make audiobooks, don't put editors' or critics' opinions anywhere before the end, even in ‘footnotes’. Only clarifying notes for unfamiliar terms.)
AdSense is the product where Google pays you for running banner ads; they can and frequently do kick people off of it for secret reasons. When my company was kicked off of AdSense back in 2010, I wrote about it extensively. https://www.choiceofgames.com/2010/08/were-banned-from-googl...
Google will never tell you why they ban people from AdSense, and there's no effective way to appeal. (They have an "appeal" process, but what are you supposed to write in the appeal when the charges against you are secret?!)
At least we can still publish Android apps, right? (We now run Facebook ads instead.)
But Google's email to DroidScript saying that the DroidScript app was removed from Google Play Store for "Ad Fraud" says otherwise.
Publishing status: Suspended
Your app has been suspended and removed due to a policy violation.
Reasons of violation
APK:206 Ad Fraud
App violates Ad Fraud policy.
If Google ever yanked our Android app over "ad fraud," we'd have no recourse. We've appealed our AdSense rejection a dozen times over the last 10 years and we always get a form letter rejection. We have no idea what they think we did wrong, and we never will, so we can never fix it.
Thank god we don't run AdSense ads anymore. Based on this, I never want to run them again!
So I say always maintain separate google accounts for individual apps, for individual usages. Separate your personal google account from your business account, from your ad-network account etc. And I would argue that each individual app should own its own account (you pay for this of course, because of the fee per account I guess).
Now the different accounts would still be linked to the same entity, and as stories like this make the news, more people would split their services across multiple accounts. As a response, it doesn't look like a stretch to me if next time Google would go after all accounts owned by the entity they deem responsible.
I mean, they already crossed the line of banning all services associated with an account. Wouldn't be surprising if they cross a few more lines as long as there is no critical impact for them.
And blood-black nothingness began to spin... A system of cells interlinked within cells interlinked within cells interlinked within one stem... And dreadfully distinct against the dark, a tall white fountain played.
So... having read through their marketing material, this is an on-device tool that opens up what appears to be most of the Android application API to at least the user of the device, and potentially to any Droidscript applications they grab from other sources, and... maybe to other apps on the device? It's not clear from a quick read how extensive the runtime control is.
So just right out of the gate this is defeating basically the entirety of the Play Store vetting process. Droidscript itself may not be engaged in advertising fraud, but it makes advertising fraud trivial to deploy. (And it needs to be said: this is the kind of app that would never have been legal at all on any version of iOS.)
Add to that that it's a closed source IDE for an open platform, and my intuition sides with Google here. My guess is that when details come out it will turn out that at-least-plausibly harmful Droidscript garbage was being pushed to users and Google decided to kill it.
Pythonista is a complete Python programming environment which provides access to camera, music, contacts, the network, and so on, and has been available for iOS since 2016. What specifically distinguishes Droidscript from Pythonista such that you think Apple would reject Droidscript?
If Droidscript required unknown sources to do anything (not just APK exports), then other apps could check the unknown sources policy on the device and disable certain intents (which they may do anyway at the moment, since that would mean that the applications installed may be untrustworthy). But this way there isn't any way to tell.
Please don't equate trust with any app store like that. Firstly, many incidents have shown that this blanket trust isn't warranted, and second, the final arbiter of trust is the owner of the device, not the owner of the app store.
This is a poor assumption to make. Any data coming into your application should be assumed to be malicious. This would be the same as a server just accepting any data made to its API calls without any validation.
When we get permissions boiled down to one or two popups we end up with issues providing accurate privileges to applications (and might be forced to allow WhatsApp to trawl through our contact list if we ever want to send a picture in it).
Granular control shifts the power to the user and allows programs like this to have more fine tuned privileges.
Given the amazing strides in usability we've seen in nearly every other field it baffles me why everyone isn't onboard with the fact that we can take the learnings from elsewhere and bring them to the domain of permissions.
Permissions are almost always hierarchical and grouped into classifications that make it easier to present the user with fewer more meaningful choices than asking the user to approve whether an app can see each contact on their phone one-by-one.
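As a sketch of what that grouping might look like (group and permission names here are invented for illustration, not any platform's actual model):

```python
# Hypothetical hierarchical permission model: many fine-grained
# permissions collapse into a few user-facing groups, so the prompt
# can ask one meaningful question instead of dozens.

PERMISSION_GROUPS = {
    "Contacts": {"contacts.read", "contacts.write"},
    "Location": {"location.coarse", "location.fine"},
    "Storage":  {"storage.read", "storage.write"},
}

def groups_for(requested_permissions):
    """Return the user-facing groups an app's permission list maps to."""
    return sorted(
        group
        for group, members in PERMISSION_GROUPS.items()
        if members & set(requested_permissions)
    )

print(groups_for({"contacts.read", "location.fine"}))  # prints ['Contacts', 'Location']
```

The tension is in choosing the granularity of the groups: too coarse and the user over-grants, too fine and they stop reading the prompts.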
I'm honestly a bit cynical (puts on tinfoil hat) that marketers have held us back here since a lack of granular permissions aligns quite well with their effort to grab as much personal data as possible.
Consider the permissions for the lowly keyboard app...
A proper understanding of fine-grained permissions basically requires a working knowledge of how that permission might be or has in the past been abused.
And ultimately, fine-grained permissions are probably answering the wrong questions. The user expresses some basic trust via the initial app installation; what permissions ultimately help with is deciding whether or not to keep trusting the developer. If the app asks for lots of unexpected stuff, it's probably malware and should be uninstalled. If the permissions seem reasonable, the app is probably fine, and the user just wants to delegate responsibility to the app to do what it needs to do to get shit done.
It's really /all/ about trust. If you can't trust a random app, installation is a high-friction event. Check the stars, number of users, read a bunch of recent reviews, carefully go through permissions providing access for exactly what's needed. If you /can/ trust a random app, you can just install it, use it to read the fscking QR code and go on with your day. The need for trust is why we've ended up with centralized app stores with stringent content policies, and all the false positives that come along with it.
Imagine if this were applied to actual laws enforced by the police. "You're under arrest but we won't tell you what law you've broken, because then other criminals might use that knowledge of the law to avoid being arrested. And by the way, a secret court has sentenced you to life imprisonment and all of your appeals have been denied."
I'm not endorsing these US policies, but it's worth noting that even in democratic law enforcement it's accepted that the system isn't always bound by transparent policy and process. There are usually justifications for keeping some things secret and discretionary to enhance law enforcement effectiveness.
Suppose it is true that these companies can't reveal their decision making because there's so much to be gained by bad actors that game these highly centralized systems.
Then it seems like a larger number of smaller firms could be more transparent and still achieve the same effective level of security.
This is the epitome of security by obscurity.
Uh... Seems like the actual problem (given that scenario) is that adware is being pushed to users, not whether or not Google defended its ban in public. Complaints about customer service (from everyone, not just Google) are a dime a dozen, actual user security is clearly more important, right?
Your answer presupposes a frame where Droidscript is innocent. What if it's not, and it knowingly nodded to a community of junkware being pushed to its users (again, I have no evidence!). In that case you'd want it banned without "decency", right?
Now, this was a special situation. I had a personal relationship with the developer, and I was happy to vouch for their honesty. Yet it still turned out Google had been right all along. Now, it's a shame Google couldn't let them know what was the issue. However, it's a safe assumption that the vast majority of people Google support deals with are spammers. And there's a lot of them. If Google gave a detailed explanation to all of them it would mean a ton of additional work – which would create an unsustainable situation at this scale.
You describe a situation where Google was going to put a whole company out of business -- probably ending your friend's job, as well as that of many other honest people -- rather than give them the information they needed to fix the problem. And you think this is reasonable, because it would be "a ton of additional work" for Google? We just have to accept people losing their livelihoods as collateral damage in the war on spammers?
Imagine if we applied the same logic to the government. If they think you committed a crime, they just toss you in jail and don't have to tell you why. They could catch a lot more criminals if they didn't have to waste time prosecuting them!
No, we need a Habeas Corpus for tech companies. If you are banned, you have to be told why. Make it a law. I don't care if it results in more spam.
> No, we need a Habeas Corpus for tech companies. If you are banned, you have to be told why. Make it a law. I don't care if it results in more spam.
The whole ordeal seems like an attempt to educate app developers by whipping, where the victims have to guess what they did wrong.
I don't think that's reasonable. What if most are spammers? Better to let a few spammers in than treat someone unjustly. Why would it become unsustainable? I've seen this argument repeated ad nauseam, but have yet to see proper proof.
In this particular example, a copyright violation was detected in an image, so an automated response of "someone else's image was used without permission, violating copyright" seems entirely plausible.
Look at the tone-deaf example this employee just shared. All they had to do was say in the same email that they used to ban someone "you have copyrighted images".
The moment they find an infraction they could literally take a screenshot, say "the problem is X" and email it. That would take the five seconds needed to attach a screenshot and name the problem they already identified, but make a world of difference for developers.
This nonsense about "it's to stop spammers" isn't about the cost. The laughably bad logic Google uses is that by identifying which rules you broke, spammers will get better at avoiding the stuff Google catches...
As if the spammers don't already know what they did to get caught!
Google should not be enforcing copyright in the first place without at least a report of infringement by the copyright holder - and in that case they should pass the report along to the developer.
> which would create an unsustainable situation at this scale.
Financial sustainability may have something to do with it, but I suspect the larger issue is that providing too much detail essentially trains malware authors to route around the company's defenses.
Imagine the Play Store as a castle which has both good townsfolk coming and going as well as being perpetually under siege by a malicious lord. Sometimes, the castle's defenses inadvertently prevent a townsperson from getting to market to sell their onions. When the townsperson is like, "Hey, I can't get in to sell my onions." it's helpful for the castle defenses to be like, "Well, we have the portcullis raised from 9am-11am on Tuesdays and the gatekeepers listen for your accent to decide if you're a local or an enemy."
But that's, like, exactly not what you want to say if the "townsperson" you're talking to is actually an enemy spy taking notes.
>"Rough consensus, and running code. We are not the Protocol Police."
Half the problems we have nowadays are because we have manufacturers playing "the Program Police", which leads inevitably to the point you just made.
You are now, like it or not, adversarial to any user looking to do anything you find unconformant with your bottom line. You cannot solve these issues by whitelisting and hiding the conformance suite, just as you can't solve the problem of crime that way. If you can't know the test, you can spend infinite cycles changing the wrong thing trying to comply with it, and I do not find that a tenable state of affairs to push on users, even if it's intentionally aimed at the malicious ones. It's the same problem we have in meatspace with our overly byzantine legal system; yet nobody accepts the argument that secret laws are a good idea because letting everyone read the law would be a national security risk. At least no one without some serious conflicts of interest.
Do you really think that your company is going to nail down a good solution to a problem that society at large can't even handle reasonably? I mean, think about it. This really is a subset of the general question of how to keep everybody doing something productive. I don't even need an answer. I just want to encourage people to think.
This model absolutely does not work when it comes to creating spaces where humans interact. There are bad actors and someone has to police them or they will abuse other users.
If you run a bar, you have to hire bouncers. It's simply part of the cost of hosting a safe venue.
Perhaps so, but it seems not unreasonable to have SOME ability to work with the creator of an app that's been on the store for years with a substantial number of ongoing users and (speculating) a non-troublesome pattern of installs and purchases.
Nobody believes that Google is technically or financially unable to do this, which leaves the other option: at a corporate level, not giving enough of a shit to even bother trying.
Google will often do the right thing whether by plan or by happenstance, but it pays to be aware that when it does the wrong thing there is no recourse and will be no correction.
You do have an ethical obligation to inform them of what policy was violated with sufficient detail that a good actor has a reasonable chance of complying with your policy.
I think that this should be required of any company that provides publicly available goods/services, not just Google. This doesn't just help with monopolies, but also makes it harder to hide racism and censorship behind opaque policies.
I bet you indent your code in an inclusive way
you see exactly this happening all the time here on HN. the sentiment for the past few years is abysmal. google is actively blowing up their power user/developer customer base. looks like a metric somewhere got optimized a bit too well.
No they weren't. It was not right to terminate the entire app because someone used an image wrong.
But it's all good, Google is a private company™ and can do whatever they want®.
Due process isn't really a sound concept if it's only for innocent people.
Google itself is adware.
If I can't ship my closed source IDE on the platform is the platform really open?
> My guess is that when details come out it will turn out that at-least-plausibly harmful Droidscript garbage was being pushed to users and Google decided to kill it.
Of course they will say it was because x, y, and z were done to protect the users. But is it really for the users' benefit or just about control over their walled garden?
For clarity: the Play Store is not an open platform. The Android API being exposed by Droidscript very much is.
Yes, I'm sure Google will carefully release details that paint them as the good guy. Certainly we don't want to be needlessly unfair to them, but there is zero reason to give them free trust at this point.
Best case is the right person sees this social media outcry, silently gets it fixed and Google moves onto destroying the next developer.
No more than being able to build an app on my laptop and push it over ADB.
> (And it needs to be said: this is the kind of app that would never have been legal at all on any version of iOS.)
It also needs to be said that this is why I don't use Apple devices. What they inflict on their platform is not an argument for what should happen elsewhere.
When did we collectively decide that programmable computers were a Bad Thing?
I thought the parent comment was speaking generally. Maybe I misunderstood.
Either way, I was thinking of downloading Droidscript as one way of saying "I want arbitrary code exec."
I think it's reasonable for Google to say "Most users don't understand what this implies, so if they want this they'll have to get it outside the Play Store."
That said, that doesn't actually seem to be what they're saying here.
I just think it's a reasonable stance to hold as an App Store.
I think that this is what has happened. The author of DroidScript claims that
> Unfortunately we also have to inform our users that we could no longer support AdMob for use in their own apps either, because we can't test it anymore and can't guarantee that Google won't treat them in the same brutal way.
So apparently users were able to do stuff with AdMob on DroidScript's back, and maybe AdMob registered these fraudulent actions with some Google-ID which was assigned to DroidScript.
Yup. Check out Aurora Store. It's an open source frontend to the Google Play Store. All apps can be installed (except, of course, paid apps; though if you bought the app and sign in to your account with Aurora, you can).
Compared to what? If someone wants to run a random APK that has some kind of ad fraud in it, they very easily can even if Droidscript doesn't exist.
Exactly, iOS is not an open platform and Google has decided they want to be more like iOS.
I am not an expert in JS or the Android API, but I wonder if you couldn't do it automatically? If types line up closely enough, I would think that you could get a list of Android APIs (pull it from AOSP if you have to) and mechanically translate to a JS API.
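To illustrate the idea, here is a minimal sketch of such a mechanical translation. Everything here is hypothetical: the `callNative` bridge function, the class/method names, and the type mapping are assumptions for illustration, not DroidScript's actual mechanism.

```python
# Hypothetical sketch: generate JS wrapper stubs from Android (Java)
# method signatures. "callNative" is an assumed bridge into the
# platform; a real binding generator would pull signatures from AOSP.

# Rough Java-to-JS type mapping for documentation comments.
JAVA_TO_JS = {
    "int": "number", "long": "number", "float": "number",
    "double": "number", "boolean": "boolean",
    "String": "string", "void": "void",
}

def to_js_stub(java_class, method, params, ret):
    """Emit a JS function stub that forwards to a native bridge.

    params is a list of (name, java_type) pairs.
    """
    js_params = ", ".join(name for name, _ in params)
    doc_params = ", ".join(
        f"{name}: {JAVA_TO_JS.get(jtype, 'any')}" for name, jtype in params
    )
    return (
        f"// {java_class}.{method}({doc_params}) -> {JAVA_TO_JS.get(ret, 'any')}\n"
        f"function {method}({js_params}) {{\n"
        f"  return callNative('{java_class}', '{method}', [{js_params}]);\n"
        f"}}"
    )

# Example: wrap a single (assumed) Android API method.
stub = to_js_stub("android.os.Vibrator", "vibrate",
                  [("milliseconds", "long")], "void")
print(stub)
```

In practice the hard parts are overloads, callbacks/listeners, and object lifetimes, which is presumably why such bindings are usually curated by hand rather than emitted fully automatically.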
If DroidScript enables ad fraud, isn't the real issue that the Android sandboxing model is fundamentally broken? Given that there are far more people using phones than computers, and a lot of new smartphone users will have never used a desktop or laptop, DroidScript might be their first venture into programming and/or hacking. Let's not shut it down.
Also, the product has a very heavy emphasis on security, the security team is superb and well funded, and Google knows the team is trustworthy.
My gut feeling says these devs aren't telling the whole story.
You can code up Garbage in Java just fine and get it on the app store. I've seen apps send passwords in plain text....
You mean the one that doesn't exist?
I wouldn't really be surprised if EVERY scripting/programming app in the play store technically violates some play store rules, though.
You can later eject it to an Android Studio/Xcode project if you want.
Define "fixed", it was removed from Play Store but anyone can still install from APK or F-Droid, right?
If that's the argument I can sort of see Google's point here. The Play Store is supposed to be curated and the application should follow certain guidelines. This tool as I understand it effectively provides a loophole that lets people run non-curated code without jailbreak. I know that Apple removed apps for similar reasons in the past.
TFA is a bit misleading: the whole "AD FRAUD" angle is frankly irrelevant. Since Google considers that the app violates the guidelines, it simply can't be eligible for the ad program.
Installing non-curated apps has always been supported on Android - no jailbreaking required. Just get an APK either straight from the developer or through any number of alternative app stores, open it, click the "yes, I'm sure" option in the security popup and you've got yourself an app.
Also, according to DroidScript itself, Google accused them of ad fraud, so maybe there is something there.